
VAR-202301-1703

Vulnerability from variot - Updated: 2025-12-22 23:37

The issue was addressed with improved memory handling. This issue is fixed in macOS Monterey 12.6.3, macOS Ventura 13.2, watchOS 9.3, macOS Big Sur 11.7.3, Safari 16.3, tvOS 16.3, iOS 16.3 and iPadOS 16.3. Processing maliciously crafted web content may lead to arbitrary code execution.
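The fix list above reduces to a simple per-platform version comparison. A minimal sketch (editorial illustration, not part of the advisory; the version strings are taken from the text above):

```python
# Illustrative triage helper: decide whether a reported OS version
# predates the fixed release named in the advisory.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like "16.2.1" into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_fixed(reported: str, fixed: str) -> bool:
    """True if the reported version is at or above the fixed release."""
    return parse_version(reported) >= parse_version(fixed)

# Example: iOS and iPadOS are fixed in 16.3
print(is_fixed("16.2.1", "16.3"))  # False: still vulnerable
print(is_fixed("16.3", "16.3"))    # True: patched
```

Tuple comparison handles versions of unequal length correctly here because Python compares element by element ((16, 2, 1) < (16, 3)).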

For the stable distribution (bullseye), these problems have been fixed in version 2.38.4-1~deb11u1.

We recommend that you upgrade your wpewebkit packages.

CVE-2023-23499: Wojciech Reguła (@_r3ggi) of SecuRing (wojciechregula.blog)

curl
Available for: macOS Big Sur
Impact: Multiple issues in curl
Description: Multiple issues were addressed by updating to curl version 7.85.0.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2023-01-23-7 watchOS 9.3

watchOS 9.3 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213599.

AppleMobileFileIntegrity
Available for: Apple Watch Series 4 and later
Impact: An app may be able to access user-sensitive data
Description: This issue was addressed by enabling hardened runtime.
CVE-2023-23499: Wojciech Regula of SecuRing (wojciechregula.blog)

ImageIO
Available for: Apple Watch Series 4 and later
Impact: Processing an image may lead to a denial-of-service
Description: A memory corruption issue was addressed with improved state management.
CVE-2023-23519: Yiğit Can YILMAZ (@yilmazcanyigit)

Kernel
Available for: Apple Watch Series 4 and later
Impact: An app may be able to leak sensitive kernel state
Description: The issue was addressed with improved memory handling.
CVE-2023-23500: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. Ltd. (@starlabs_sg)

Kernel
Available for: Apple Watch Series 4 and later
Impact: An app may be able to determine kernel memory layout
Description: An information disclosure issue was addressed by removing the vulnerable code.
CVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. Ltd. (@starlabs_sg)

Kernel
Available for: Apple Watch Series 4 and later
Impact: An app may be able to execute arbitrary code with kernel privileges
Description: The issue was addressed with improved memory handling.
CVE-2023-23504: Adam Doupé of ASU SEFCOM

Maps
Available for: Apple Watch Series 4 and later
Impact: An app may be able to bypass Privacy preferences
Description: A logic issue was addressed with improved state management.
CVE-2023-23503: an anonymous researcher

Safari
Available for: Apple Watch Series 4 and later
Impact: Visiting a website may lead to an app denial-of-service
Description: The issue was addressed with improved handling of caches.
CVE-2023-23512: Adriatik Raci

Screen Time
Available for: Apple Watch Series 4 and later
Impact: An app may be able to access information about a user’s contacts
Description: A privacy issue was addressed with improved private data redaction for log entries.
CVE-2023-23505: Wojciech Reguła of SecuRing (wojciechregula.blog)

Weather
Available for: Apple Watch Series 4 and later
Impact: An app may be able to bypass Privacy preferences
Description: The issue was addressed with improved memory handling.
WebKit Bugzilla: 245464
CVE-2023-23496: ChengGang Wu, Yan Kang, YuHao Hu, Yue Sun, Jiming Wang, JiKai Ren and Hang Shu of Institute of Computing Technology, Chinese Academy of Sciences

WebKit
Available for: Apple Watch Series 4 and later
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: The issue was addressed with improved memory handling.
WebKit Bugzilla: 248268
CVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE
WebKit Bugzilla: 248268
CVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park (@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung), JunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE

Additional recognition

Kernel We would like to acknowledge Nick Stenning of Replicate for their assistance.

WebKit We would like to acknowledge Eliya Stein of Confiant for their assistance.

Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641 To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About". Alternatively, on your watch, select "My Watch > General > About". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmPPImAACgkQ4RjMIDke NxlPXQ//eXfTfjIg6Y/1b0u3+Ht29Qjn7kw6Gh296lh6jlGatQ8zXyk1dGl6MKcp ZTc7DFfL1VUN6MovOqW5qcR+MIV6hDiUd54ncDgjCXdHrtTG+bYchX5CJf5IIb67 gZP/2bBt4PQ+PHm3KqXPp7QauJWYD1d7AHChwqEbYchHxvgedB7Pu6nJvG3bnFmh 8ny/xrFEhtIDahw4MbicvK847aVpXyH6NxEoRY+8b9/4VocttfUPwMkGZTkVt/tz 9qfmKgjWpX2mTP9iaLlZdCUV/I4HcjTW0/nkDoaTBVDLW96DSeIo4nMM3qkcygRl TPVlvm+3Nenib1b6PZ71B26IJbmGdwR02SEpUPDDXbTGZeWmcyXe7ncvwSIbcGRI sPGMq6mEPi+rKTXKZeqPSDFnUlZJna2aNg9fPL9AZ1gwfNSbuhh5ZKQ9AAWA+k51 4QtoReAKUXinl8vr7BNVQSJSiZLMdgph4nCTYk1RA/VHDPjwaAJehDFUqKKKTuvp h59J8OSw0HaWP2NcMEglO4/EXj09E3gfveQ74KtG+eDbBMKa+RArIgOZZaDOk2F1 6Fs316bNBI9tMxP34gFEvexTTBuQpoR/76pSQajlaSdas5Jeub2QeJVHkBPMdatD HBbjpZhu4JcYqDVSDt58Ra5IsaM+OLkhvvCfi+UYxOiyZis3gn8= =/vLS -----END PGP SIGNATURE-----


Safari 16.3 may be obtained from the Mac App Store.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202305-32
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
                                           https://security.gentoo.org/
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

 Severity: High
    Title: WebKitGTK+: Multiple Vulnerabilities
     Date: May 30, 2023
     Bugs: #871732, #879571, #888563, #905346, #905349, #905351
       ID: 202305-32


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which could result in arbitrary code execution.

Affected packages

Package              Vulnerable    Unaffected
-------------------  ------------  ------------
net-libs/webkit-gtk  < 2.40.1      >= 2.40.1

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebKitGTK+ users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.40.1"

References

[ 1 ] CVE-2022-32885
      https://nvd.nist.gov/vuln/detail/CVE-2022-32885
[ 2 ] CVE-2022-32886
      https://nvd.nist.gov/vuln/detail/CVE-2022-32886
[ 3 ] CVE-2022-32888
      https://nvd.nist.gov/vuln/detail/CVE-2022-32888
[ 4 ] CVE-2022-32891
      https://nvd.nist.gov/vuln/detail/CVE-2022-32891
[ 5 ] CVE-2022-32923
      https://nvd.nist.gov/vuln/detail/CVE-2022-32923
[ 6 ] CVE-2022-42799
      https://nvd.nist.gov/vuln/detail/CVE-2022-42799
[ 7 ] CVE-2022-42823
      https://nvd.nist.gov/vuln/detail/CVE-2022-42823
[ 8 ] CVE-2022-42824
      https://nvd.nist.gov/vuln/detail/CVE-2022-42824
[ 9 ] CVE-2022-42826
      https://nvd.nist.gov/vuln/detail/CVE-2022-42826
[ 10 ] CVE-2022-42852
      https://nvd.nist.gov/vuln/detail/CVE-2022-42852
[ 11 ] CVE-2022-42856
      https://nvd.nist.gov/vuln/detail/CVE-2022-42856
[ 12 ] CVE-2022-42863
      https://nvd.nist.gov/vuln/detail/CVE-2022-42863
[ 13 ] CVE-2022-42867
      https://nvd.nist.gov/vuln/detail/CVE-2022-42867
[ 14 ] CVE-2022-46691
      https://nvd.nist.gov/vuln/detail/CVE-2022-46691
[ 15 ] CVE-2022-46692
      https://nvd.nist.gov/vuln/detail/CVE-2022-46692
[ 16 ] CVE-2022-46698
      https://nvd.nist.gov/vuln/detail/CVE-2022-46698
[ 17 ] CVE-2022-46699
      https://nvd.nist.gov/vuln/detail/CVE-2022-46699
[ 18 ] CVE-2022-46700
      https://nvd.nist.gov/vuln/detail/CVE-2022-46700
[ 19 ] CVE-2023-23517
      https://nvd.nist.gov/vuln/detail/CVE-2023-23517
[ 20 ] CVE-2023-23518
      https://nvd.nist.gov/vuln/detail/CVE-2023-23518
[ 21 ] CVE-2023-23529
      https://nvd.nist.gov/vuln/detail/CVE-2023-23529
[ 22 ] CVE-2023-25358
      https://nvd.nist.gov/vuln/detail/CVE-2023-25358
[ 23 ] CVE-2023-25360
      https://nvd.nist.gov/vuln/detail/CVE-2023-25360
[ 24 ] CVE-2023-25361
      https://nvd.nist.gov/vuln/detail/CVE-2023-25361
[ 25 ] CVE-2023-25362
      https://nvd.nist.gov/vuln/detail/CVE-2023-25362
[ 26 ] CVE-2023-25363
      https://nvd.nist.gov/vuln/detail/CVE-2023-25363
[ 27 ] CVE-2023-27932
      https://nvd.nist.gov/vuln/detail/CVE-2023-27932
[ 28 ] CVE-2023-27954
      https://nvd.nist.gov/vuln/detail/CVE-2023-27954
[ 29 ] CVE-2023-28205
      https://nvd.nist.gov/vuln/detail/CVE-2023-28205
[ 30 ] WSA-2022-0009
      https://webkitgtk.org/security/WSA-2022-0009.html
[ 31 ] WSA-2022-0010
      https://webkitgtk.org/security/WSA-2022-0010.html
[ 32 ] WSA-2023-0001
      https://webkitgtk.org/security/WSA-2023-0001.html
[ 33 ] WSA-2023-0002
      https://webkitgtk.org/security/WSA-2023-0002.html
[ 34 ] WSA-2023-0003
      https://webkitgtk.org/security/WSA-2023-0003.html
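A reference list in this shape is easy to mine mechanically. A small sketch (editorial illustration, not part of the advisory) that collects the unique CVE identifiers from a block of advisory text:

```python
import re

def extract_cves(text: str) -> list:
    """Return the unique CVE identifiers in the text, in order of first appearance."""
    seen = {}
    for match in re.findall(r"CVE-\d{4}-\d{4,7}", text):
        seen.setdefault(match, None)  # dicts preserve insertion order
    return list(seen)

advisory = """
[ 19 ] CVE-2023-23517 https://nvd.nist.gov/vuln/detail/CVE-2023-23517
[ 20 ] CVE-2023-23518 https://nvd.nist.gov/vuln/detail/CVE-2023-23518
"""
print(extract_cves(advisory))  # ['CVE-2023-23517', 'CVE-2023-23518']
```

Each ID appears twice per entry (label and URL), so deduplication is needed before cross-referencing against a database.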

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202305-32

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2023 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Important: webkit2gtk3 security and bug fix update
Advisory ID:       RHSA-2023:2256-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:2256
Issue date:        2023-05-09
CVE Names:         CVE-2022-32886 CVE-2022-32888 CVE-2022-32923
                   CVE-2022-42799 CVE-2022-42823 CVE-2022-42824
                   CVE-2022-42826 CVE-2022-42852 CVE-2022-42863
                   CVE-2022-42867 CVE-2022-46691 CVE-2022-46692
                   CVE-2022-46698 CVE-2022-46699 CVE-2022-46700
                   CVE-2023-23517 CVE-2023-23518 CVE-2023-25358
                   CVE-2023-25360 CVE-2023-25361 CVE-2023-25362
                   CVE-2023-25363
====================================================================
1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 9.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64

3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.2 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Package List:

Red Hat Enterprise Linux AppStream (v. 9):

Source: webkit2gtk3-2.38.5-1.el9.src.rpm

aarch64:
webkit2gtk3-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-debuginfo-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-debugsource-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-devel-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-devel-debuginfo-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-jsc-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-jsc-devel-2.38.5-1.el9.aarch64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.aarch64.rpm

ppc64le:
webkit2gtk3-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-debuginfo-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-debugsource-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-devel-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-jsc-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-jsc-devel-2.38.5-1.el9.ppc64le.rpm
webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm

s390x:
webkit2gtk3-2.38.5-1.el9.s390x.rpm
webkit2gtk3-debuginfo-2.38.5-1.el9.s390x.rpm
webkit2gtk3-debugsource-2.38.5-1.el9.s390x.rpm
webkit2gtk3-devel-2.38.5-1.el9.s390x.rpm
webkit2gtk3-devel-debuginfo-2.38.5-1.el9.s390x.rpm
webkit2gtk3-jsc-2.38.5-1.el9.s390x.rpm
webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.s390x.rpm
webkit2gtk3-jsc-devel-2.38.5-1.el9.s390x.rpm
webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.s390x.rpm

x86_64:
webkit2gtk3-2.38.5-1.el9.i686.rpm
webkit2gtk3-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-debuginfo-2.38.5-1.el9.i686.rpm
webkit2gtk3-debuginfo-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-debugsource-2.38.5-1.el9.i686.rpm
webkit2gtk3-debugsource-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-devel-2.38.5-1.el9.i686.rpm
webkit2gtk3-devel-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-devel-debuginfo-2.38.5-1.el9.i686.rpm
webkit2gtk3-devel-debuginfo-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-jsc-2.38.5-1.el9.i686.rpm
webkit2gtk3-jsc-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.i686.rpm
webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-jsc-devel-2.38.5-1.el9.i686.rpm
webkit2gtk3-jsc-devel-2.38.5-1.el9.x86_64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.i686.rpm
webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.x86_64.rpm
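The file names in this package list follow the conventional name-version-release.arch.rpm layout. A rough sketch (editorial illustration, not part of the advisory; it assumes the version and release fields contain no hyphens, which holds for every package listed here) that splits one apart:

```python
def split_rpm_filename(filename: str) -> dict:
    """Split an RPM file name of the form name-version-release.arch.rpm.

    Simplification: version and release must not contain hyphens;
    full NEVRA parsing (epochs etc.) is left to rpm itself.
    """
    base = filename[:-len(".rpm")] if filename.endswith(".rpm") else filename
    rest, arch = base.rsplit(".", 1)             # peel off the architecture
    name_version, release = rest.rsplit("-", 1)  # e.g. release "1.el9"
    name, version = name_version.rsplit("-", 1)  # e.g. version "2.38.5"
    return {"name": name, "version": version, "release": release, "arch": arch}

print(split_rpm_filename("webkit2gtk3-jsc-devel-2.38.5-1.el9.x86_64.rpm"))
```

Applied to the list above, every package resolves to version 2.38.5, release 1.el9, matching the advisory's stated update target.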

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

6. References:

https://access.redhat.com/security/cve/CVE-2022-32886
https://access.redhat.com/security/cve/CVE-2022-32888
https://access.redhat.com/security/cve/CVE-2022-32923
https://access.redhat.com/security/cve/CVE-2022-42799
https://access.redhat.com/security/cve/CVE-2022-42823
https://access.redhat.com/security/cve/CVE-2022-42824
https://access.redhat.com/security/cve/CVE-2022-42826
https://access.redhat.com/security/cve/CVE-2022-42852
https://access.redhat.com/security/cve/CVE-2022-42863
https://access.redhat.com/security/cve/CVE-2022-42867
https://access.redhat.com/security/cve/CVE-2022-46691
https://access.redhat.com/security/cve/CVE-2022-46692
https://access.redhat.com/security/cve/CVE-2022-46698
https://access.redhat.com/security/cve/CVE-2022-46699
https://access.redhat.com/security/cve/CVE-2022-46700
https://access.redhat.com/security/cve/CVE-2023-23517
https://access.redhat.com/security/cve/CVE-2023-23518
https://access.redhat.com/security/cve/CVE-2023-25358
https://access.redhat.com/security/cve/CVE-2023-25360
https://access.redhat.com/security/cve/CVE-2023-25361
https://access.redhat.com/security/cve/CVE-2023-25362
https://access.redhat.com/security/cve/CVE-2023-25363
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202301-1703",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.3"
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.3"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.3"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.3"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.7.3"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.6.3"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "9.3"
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "170698"
      },
      {
        "db": "PACKETSTORM",
        "id": "170699"
      },
      {
        "db": "PACKETSTORM",
        "id": "170700"
      },
      {
        "db": "PACKETSTORM",
        "id": "170764"
      }
    ],
    "trust": 0.4
  },
  "cve": "CVE-2023-23517",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2023-23517",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2023-23517",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
            "id": "CVE-2023-23517",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202301-1778",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The issue was addressed with improved memory handling. This issue is fixed in macOS Monterey 12.6.3, macOS Ventura 13.2, watchOS 9.3, macOS Big Sur 11.7.3, Safari 16.3, tvOS 16.3, iOS 16.3 and iPadOS 16.3. Processing maliciously crafted web content may lead to arbitrary code execution. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.38.4-1~deb11u1. \n\nWe recommend that you upgrade your wpewebkit packages. \nCVE-2023-23499: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n(wojciechregula.blog)\n\ncurl\nAvailable for: macOS Big Sur\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.85.0. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2023-01-23-7 watchOS 9.3\n\nwatchOS 9.3 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213599. \n\nAppleMobileFileIntegrity\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to access user-sensitive data\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2023-23499: Wojciech Regula of SecuRing (wojciechregula.blog)\n\nImageIO\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing an image may lead to a denial-of-service\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2023-23519: Yi\u011fit Can YILMAZ (@yilmazcanyigit)\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to leak sensitive kernel state\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23500: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. \nLtd. (@starlabs_sg)\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to determine kernel memory layout\nDescription: An information disclosure issue was addressed by\nremoving the vulnerable code. 
\nCVE-2023-23502: Pan ZhenPeng (@Peterpan0927) of STAR Labs SG Pte. \nLtd. (@starlabs_sg)\n\nKernel\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2023-23504: Adam Doup\u00e9 of ASU SEFCOM\n\nMaps\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2023-23503: an anonymous researcher\n\nSafari\nAvailable for: Apple Watch Series 4 and later\nImpact: Visiting a website may lead to an app denial-of-service\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2023-23512: Adriatik Raci\n\nScreen Time\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to access information about a user\u2019s\ncontacts\nDescription: A privacy issue was addressed with improved private data\nredaction for log entries. \nCVE-2023-23505: Wojciech Regu\u0142a of SecuRing (wojciechregula.blog)\n\nWeather\nAvailable for: Apple Watch Series 4 and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: The issue was addressed with improved memory handling. \nWebKit Bugzilla: 245464\nCVE-2023-23496: ChengGang Wu, Yan Kang, YuHao Hu, Yue Sun, Jiming\nWang, JiKai Ren and Hang Shu of Institute of Computing Technology,\nChinese Academy of Sciences\n\nWebKit\nAvailable for: Apple Watch Series 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: The issue was addressed with improved memory handling. 
\nWebKit Bugzilla: 248268\nCVE-2023-23518: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\nWebKit Bugzilla: 248268\nCVE-2023-23517: YeongHyeon Choi (@hyeon101010), Hyeon Park\n(@tree_segment), SeOk JEON (@_seokjeon), YoungSung Ahn (@_ZeroSung),\nJunSeo Bae (@snakebjs0107), Dohyun Lee (@l33d0hyun) of Team ApplePIE\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Nick Stenning of Replicate for their\nassistance. \n\nWebKit\nWe would like to acknowledge Eliya Stein of Confiant for their\nassistance. \n\nInstructions on how to update your Apple Watch software are available\nat https://support.apple.com/kb/HT204641  To check the version on\nyour Apple Watch, open the Apple Watch app on your iPhone and select\n\"My Watch \u003e General \u003e About\".  Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmPPImAACgkQ4RjMIDke\nNxlPXQ//eXfTfjIg6Y/1b0u3+Ht29Qjn7kw6Gh296lh6jlGatQ8zXyk1dGl6MKcp\nZTc7DFfL1VUN6MovOqW5qcR+MIV6hDiUd54ncDgjCXdHrtTG+bYchX5CJf5IIb67\ngZP/2bBt4PQ+PHm3KqXPp7QauJWYD1d7AHChwqEbYchHxvgedB7Pu6nJvG3bnFmh\n8ny/xrFEhtIDahw4MbicvK847aVpXyH6NxEoRY+8b9/4VocttfUPwMkGZTkVt/tz\n9qfmKgjWpX2mTP9iaLlZdCUV/I4HcjTW0/nkDoaTBVDLW96DSeIo4nMM3qkcygRl\nTPVlvm+3Nenib1b6PZ71B26IJbmGdwR02SEpUPDDXbTGZeWmcyXe7ncvwSIbcGRI\nsPGMq6mEPi+rKTXKZeqPSDFnUlZJna2aNg9fPL9AZ1gwfNSbuhh5ZKQ9AAWA+k51\n4QtoReAKUXinl8vr7BNVQSJSiZLMdgph4nCTYk1RA/VHDPjwaAJehDFUqKKKTuvp\nh59J8OSw0HaWP2NcMEglO4/EXj09E3gfveQ74KtG+eDbBMKa+RArIgOZZaDOk2F1\n6Fs316bNBI9tMxP34gFEvexTTBuQpoR/76pSQajlaSdas5Jeub2QeJVHkBPMdatD\nHBbjpZhu4JcYqDVSDt58Ra5IsaM+OLkhvvCfi+UYxOiyZis3gn8=\n=/vLS\n-----END PGP SIGNATURE-----\n\n\n. \n\nSafari 16.3 may be obtained from the Mac App Store. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202305-32\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebKitGTK+: Multiple Vulnerabilities\n     Date: May 30, 2023\n     Bugs: #871732, #879571, #888563, #905346, #905349, #905351\n       ID: 202305-32\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in arbitrary code execution. 
\n\nAffected packages\n================\nPackage              Vulnerable    Unaffected\n-------------------  ------------  ------------\nnet-libs/webkit-gtk  \u003c 2.40.1      \u003e= 2.40.1\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll WebKitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.40.1\"\n\nReferences\n=========\n[ 1 ] CVE-2022-32885\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32885\n[ 2 ] CVE-2022-32886\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32886\n[ 3 ] CVE-2022-32888\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32888\n[ 4 ] CVE-2022-32891\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32891\n[ 5 ] CVE-2022-32923\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32923\n[ 6 ] CVE-2022-42799\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42799\n[ 7 ] CVE-2022-42823\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42823\n[ 8 ] CVE-2022-42824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42824\n[ 9 ] CVE-2022-42826\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42826\n[ 10 ] CVE-2022-42852\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42852\n[ 11 ] CVE-2022-42856\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42856\n[ 12 ] CVE-2022-42863\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42863\n[ 13 ] CVE-2022-42867\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42867\n[ 14 ] CVE-2022-46691\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46691\n[ 15 ] CVE-2022-46692\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46692\n[ 16 ] CVE-2022-46698\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46698\n[ 17 ] CVE-2022-46699\n      
https://nvd.nist.gov/vuln/detail/CVE-2022-46699\n[ 18 ] CVE-2022-46700\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46700\n[ 19 ] CVE-2023-23517\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23517\n[ 20 ] CVE-2023-23518\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23518\n[ 21 ] CVE-2023-23529\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23529\n[ 22 ] CVE-2023-25358\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25358\n[ 23 ] CVE-2023-25360\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25360\n[ 24 ] CVE-2023-25361\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25361\n[ 25 ] CVE-2023-25362\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25362\n[ 26 ] CVE-2023-25363\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25363\n[ 27 ] CVE-2023-27932\n      https://nvd.nist.gov/vuln/detail/CVE-2023-27932\n[ 28 ] CVE-2023-27954\n      https://nvd.nist.gov/vuln/detail/CVE-2023-27954\n[ 29 ] CVE-2023-28205\n      https://nvd.nist.gov/vuln/detail/CVE-2023-28205\n[ 30 ] WSA-2022-0009\n      https://webkitgtk.org/security/WSA-2022-0009.html\n[ 31 ] WSA-2022-0010\n      https://webkitgtk.org/security/WSA-2022-0010.html\n[ 32 ] WSA-2023-0001\n      https://webkitgtk.org/security/WSA-2023-0001.html\n[ 33 ] WSA-2023-0002\n      https://webkitgtk.org/security/WSA-2023-0002.html\n[ 34 ] WSA-2023-0003\n      https://webkitgtk.org/security/WSA-2023-0003.html\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202305-32\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2023 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). 
\n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: webkit2gtk3 security and bug fix update\nAdvisory ID:       RHSA-2023:2256-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2023:2256\nIssue date:        2023-05-09\nCVE Names:         CVE-2022-32886 CVE-2022-32888 CVE-2022-32923\n                   CVE-2022-42799 CVE-2022-42823 CVE-2022-42824\n                   CVE-2022-42826 CVE-2022-42852 CVE-2022-42863\n                   CVE-2022-42867 CVE-2022-46691 CVE-2022-46692\n                   CVE-2022-46698 CVE-2022-46699 CVE-2022-46700\n                   CVE-2023-23517 CVE-2023-23518 CVE-2023-25358\n                   CVE-2023-25360 CVE-2023-25361 CVE-2023-25362\n                   CVE-2023-25363\n====================================================================\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 9.2 Release Notes linked from the References section. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 9):\n\nSource:\nwebkit2gtk3-2.38.5-1.el9.src.rpm\n\naarch64:\nwebkit2gtk3-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.i686.rpm\n
webkit2gtk3-devel-debuginfo-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32886\nhttps://access.redhat.com/security/cve/CVE-2022-32888\nhttps://access.redhat.com/security/cve/CVE-2022-32923\nhttps://access.redhat.com/security/cve/CVE-2022-42799\nhttps://access.redhat.com/security/cve/CVE-2022-42823\nhttps://access.redhat.com/security/cve/CVE-2022-42824\nhttps://access.redhat.com/security/cve/CVE-2022-42826\nhttps://access.redhat.com/security/cve/CVE-2022-42852\nhttps://access.redhat.com/security/cve/CVE-2022-42863\nhttps://access.redhat.com/security/cve/CVE-2022-42867\nhttps://access.redhat.com/security/cve/CVE-2022-46691\nhttps://access.redhat.com/security/cve/CVE-2022-46692\nhttps://access.redhat.com/security/cve/CVE-2022-46698\nhttps://access.redhat.com/security/cve/CVE-2022-46699\nhttps://access.redhat.com/security/cve/CVE-2022-46700\nhttps://access.redhat.com/security/cve/CVE-2023-23517\nhttps://access.redhat.com/security/cve/CVE-2023-23518\nhttps://access.redhat.com/security/cve/CVE-2023-25358\nhttps://access.redhat.com/security/cve/CVE-2023-25360\nhttps://access.redhat.com/security/cve/CVE-2023-25361\nhttps://access.redhat.com/security/cve/CVE-2023-25362\nhttps://access.redhat.com/security/cve/CVE-2023-25363\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_relea
se_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      },
      {
        "db": "VULHUB",
        "id": "VHN-451828"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-23517"
      },
      {
        "db": "PACKETSTORM",
        "id": "170883"
      },
      {
        "db": "PACKETSTORM",
        "id": "170698"
      },
      {
        "db": "PACKETSTORM",
        "id": "170699"
      },
      {
        "db": "PACKETSTORM",
        "id": "170700"
      },
      {
        "db": "PACKETSTORM",
        "id": "170764"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      }
    ],
    "trust": 1.71
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-23517",
        "trust": 2.5
      },
      {
        "db": "PACKETSTORM",
        "id": "170764",
        "trust": 0.8
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0847",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1216",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1322",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "170883",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170879",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170693",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-451828",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-23517",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170698",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170699",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170700",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172625",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172241",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-451828"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-23517"
      },
      {
        "db": "PACKETSTORM",
        "id": "170883"
      },
      {
        "db": "PACKETSTORM",
        "id": "170698"
      },
      {
        "db": "PACKETSTORM",
        "id": "170699"
      },
      {
        "db": "PACKETSTORM",
        "id": "170700"
      },
      {
        "db": "PACKETSTORM",
        "id": "170764"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "id": "VAR-202301-1703",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-451828"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:37:13.194000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Apple tvOS Security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=222823"
      },
      {
        "title": "Debian Security Advisories: DSA-5341-1 wpewebkit -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=5e70abce1aa7123c9afa5abe0f161b39"
      },
      {
        "title": "Red Hat: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2023-23517"
      },
      {
        "title": "Debian Security Advisories: DSA-5340-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b49a70b5a07d35b346baa401a02d0f5e"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2023-23517"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-119",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.3,
        "url": "https://support.apple.com/en-us/ht213601"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213599"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213600"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213603"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213604"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213605"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213606"
      },
      {
        "trust": 1.0,
        "url": "https://support.apple.com/en-us/ht213638"
      },
      {
        "trust": 0.7,
        "url": "https://security.gentoo.org/glsa/202305-32"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23517"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23518"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0847"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1216"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1322"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2023-23517/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170764/apple-security-advisory-2023-01-24-1.html"
      },
      {
        "trust": 0.4,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.4,
        "url": "https://support.apple.com/en-us/ht201222."
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42826"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23499"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23496"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2023-23517"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23505"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23503"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23512"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23504"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23511"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23519"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23502"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23500"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46698"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42867"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32888"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46692"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42799"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42824"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46691"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42823"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46699"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32923"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32886"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/2023/dsa-5341"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/wpewebkit"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/downloads/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23497"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23508"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213603."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23513"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213599."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213600."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213601."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32891"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0010.html"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0001.html"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0009.html"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0003.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32885"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-27932"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46700"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-27954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25361"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25360"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42856"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-28205"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-23518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46692"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:2256"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46699"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25360"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42867"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42863"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32886"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46698"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42826"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-451828"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-23517"
      },
      {
        "db": "PACKETSTORM",
        "id": "170883"
      },
      {
        "db": "PACKETSTORM",
        "id": "170698"
      },
      {
        "db": "PACKETSTORM",
        "id": "170699"
      },
      {
        "db": "PACKETSTORM",
        "id": "170700"
      },
      {
        "db": "PACKETSTORM",
        "id": "170764"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-451828"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-23517"
      },
      {
        "db": "PACKETSTORM",
        "id": "170883"
      },
      {
        "db": "PACKETSTORM",
        "id": "170698"
      },
      {
        "db": "PACKETSTORM",
        "id": "170699"
      },
      {
        "db": "PACKETSTORM",
        "id": "170700"
      },
      {
        "db": "PACKETSTORM",
        "id": "170764"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-02-27T00:00:00",
        "db": "VULHUB",
        "id": "VHN-451828"
      },
      {
        "date": "2023-02-07T17:25:33",
        "db": "PACKETSTORM",
        "id": "170883"
      },
      {
        "date": "2023-01-24T16:41:28",
        "db": "PACKETSTORM",
        "id": "170698"
      },
      {
        "date": "2023-01-24T16:41:48",
        "db": "PACKETSTORM",
        "id": "170699"
      },
      {
        "date": "2023-01-24T16:41:58",
        "db": "PACKETSTORM",
        "id": "170700"
      },
      {
        "date": "2023-01-27T15:06:30",
        "db": "PACKETSTORM",
        "id": "170764"
      },
      {
        "date": "2023-05-30T16:32:33",
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "date": "2023-05-09T15:24:16",
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "date": "2023-01-24T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      },
      {
        "date": "2023-02-27T20:15:14.320000",
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-03-08T00:00:00",
        "db": "VULHUB",
        "id": "VHN-451828"
      },
      {
        "date": "2023-05-31T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      },
      {
        "date": "2025-03-11T16:15:13.490000",
        "db": "NVD",
        "id": "CVE-2023-23517"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple tvOS Security hole",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-1778"
      }
    ],
    "trust": 0.6
  }
}

VAR-201912-1850

Vulnerability from variot - Updated: 2025-12-22 23:35

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari, iOS, and tvOS are all products of Apple. Apple Safari is a web browser shipped as the default browser of the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple watchOS prior to 6.1; Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; iCloud for Windows prior to 11.0; iTunes for Windows prior to 12.10.2; and iCloud for Windows prior to 7.15. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601) An out-of-bounds read was addressed with improved input validation. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1.
This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846) WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (the versions immediately prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote attackers to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902).

Description:

Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally.

Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
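The advisory's advice to amend Dockerfiles to reference the updated image can be sketched as below. This is a minimal, hedged illustration: the registry host, image name, and tag are placeholders invented for the example, not the actual STF image coordinates from this advisory.

```shell
# Create a throwaway Dockerfile that tracks a floating "latest" tag.
# registry.example.com and the tag 1.0-43 are illustrative placeholders.
cat > Dockerfile <<'EOF'
FROM registry.example.com/stf/prometheus-webhook-snmp:latest
EOF

# Repoint the FROM line at a specific tag so rebuilds are reproducible
# and pick up the patched image deliberately rather than implicitly.
sed -i 's|prometheus-webhook-snmp:latest|prometheus-webhook-snmp:1.0-43|' Dockerfile

cat Dockerfile
# -> FROM registry.example.com/stf/prometheus-webhook-snmp:1.0-43
```

Pinning by digest (`image@sha256:...`) is stricter still, since tags can be re-pushed while a digest always resolves to the same image.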

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

The compliance-operator image updates are now available for OpenShift Container Platform 4.6.

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Solution:

For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html. Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918990 - ComplianceSuite scans use quay content image for initContainer 1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present 1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules 1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902251 - The compliancesuite object returns error with ocp4-cis tailored profile 1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object 1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object 1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator 1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" 1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

APPLE-SA-2019-10-29-3 tvOS 13.2

tvOS 13.2 is now available and addresses the following:

Accounts Available for: Apple TV 4K and Apple TV HD Impact: A remote attacker may be able to leak memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at Technische Universität Darmstadt

App Store Available for: Apple TV 4K and Apple TV HD Impact: A local attacker may be able to login to the account of a previously logged in user without valid credentials. CVE-2019-8803: Kiyeon An, 차민규 (CHA Minkyu)

Audio Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8785: Ian Beer of Google Project Zero CVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure

AVEVideoEncoder Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure

File System Events Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8798: ABC Research s.r.o. working with Trend Micro's Zero Day Initiative

Kernel Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to read restricted memory Description: A validation issue was addressed with improved input sanitization. CVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure

Kernel Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8782: Cheolung Lee of LINE+ Security Team CVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team CVE-2019-8808: found by OSS-Fuzz CVE-2019-8811: Soyeon Park of SSLab at Georgia Tech CVE-2019-8812: an anonymous researcher CVE-2019-8814: Cheolung Lee of LINE+ Security Team CVE-2019-8816: Soyeon Park of SSLab at Georgia Tech CVE-2019-8819: Cheolung Lee of LINE+ Security Team CVE-2019-8820: Samuel Groß of Google Project Zero CVE-2019-8821: Sergei Glazunov of Google Project Zero CVE-2019-8822: Sergei Glazunov of Google Project Zero CVE-2019-8823: Sergei Glazunov of Google Project Zero

WebKit Process Model Available for: Apple TV 4K and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2019-8815: Apple

Additional recognition

CFNetwork We would like to acknowledge Lily Chen of Google for their assistance.

Kernel We would like to acknowledge Jann Horn of Google Project Zero for their assistance.

WebKit We would like to acknowledge Dlive of Tencent's Xuanwu Lab and Zhiyi Zhang of Codesafe Team of Legendsec at Qi'anxin Group for their assistance.

Installation note:

Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software."

To check the current version of software, select "Settings -> General -> About."

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQJdBAEBCABHFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl24p5UpHHByb2R1Y3Qt c2VjdXJpdHktbm9yZXBseUBsaXN0cy5hcHBsZS5jb20ACgkQBz4uGe3y0M2DIQ/9 FQmnN+1/tdXaFFI1PtdJ9hgXONcdsi+D05mREDTX7v0VaLzChX/N3DccI00Z1uT5 VNKHRjInGYDZoO/UntzAWoZa+tcueaY23XhN9xTYrUlt1Ol1gIsaxTEgPtax4B9A PoqWb6S+oK1SHUxglGnlLtXkcyt3WHJ5iqan7BM9XX6dsriwgoBgKADpFi3FCXoa cFIvpoM6ZhxYyMPpxmMc1IRwgjDwOn2miyjkSaAONXw5R5YGRxSsjq+HkzYE3w1m m2NZElUB1nRmlyuU3aMsHUTxwAnfzryPiHRGUTcNZao39YBsyWz56sr3++g7qmnD uZZzBnISQpC6oJCWclw3UHcKHH+V0+1q059GHBoku6Xmkc5bPRnKdFgSf5OvyQUw XGjwL5UbGB5eTtdj/Kx5Rd/m5fFIUxVu7HB3bGQGhYHIc9iTdi9j3mCd3nOHCIEj Re07c084jl2Git4sH2Tva7tOqFyI2IyNVJ0LjBXO54fAC2mtFz3mkDFxCEzL5V92 O/Wct2T6OpYghzkrOOlUEAQJTbwJjTZBWsUubcOoJo6P9JUPBDJKB0ibAaCWrt9I 8OU5wRr3q0fTA3N/qdGGbQ/tgUwiMHGuqrHMv0XYGPfO5Qg5GuHpTYchZrP5nFwf ziQuQtO92b1FA4sDI+ue1sDIG84tPrkTAeLmveBjezc= =KsmX -----END PGP SIGNATURE-----

Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if 
the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. 1891285 - Common templates and kubevirt-config cm - update machine-type 1891440 - [v2v][VMware to CNV VM import API]Source VM with no network interface fail with unclear error 1892227 - [SSP] cluster scoped resources are not being reconciled 1893278 - openshift-virtualization-os-images namespace not seen by user 1893646 - [HCO] Pod placement configuration - dry run is not performed for all the configuration stanza 1894428 - Message for VMI not migratable is not clear enough 1894824 - [v2v][VM import] Pick the smallest template for the imported VM, and not always Medium 1894897 - [v2v][VMIO] VMimport CR is not reported as failed when target VM is deleted during the import 1895414 - Virt-operator is accepting updates to the placement of its workload components even with running VMs 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1898072 - Add Fedora33 to Fedora common templates 1898840 - [v2v] VM import VMWare to CNV Import 63 chars vm name should not fail 1899558 - CNV 2.6 - nmstate fails to set state 1901480 - VM disk io can't worked if namespace have label kubemacpool 1902046 - Not possible to edit CDIConfig (through CDI CR / CDIConfig) 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to 
denial of service 1903014 - hco-webhook pod in CreateContainerError 1903585 - [v2v] Windows 2012 VM imported from RHV goes into Windows repair mode 1904797 - [VMIO][vmware] A migrated RHEL/Windows VM starts in emergency mode/safe mode when target storage is NFS and target namespace is NOT "default" 1906199 - [CNV-2.5] CNV Tries to Install on Windows Workers 1907151 - kubevirt version is not reported correctly via virtctl 1907352 - VM/VMI link changes to kubevirt.io~v1~VirtualMachineInstance on CNV 2.6 1907691 - [CNV] Configuring NodeNetworkConfigurationPolicy caused "Internal error occurred" for creating datavolume 1907988 - VM loses dynamic IP address of its default interface after migration 1908363 - Applying NodeNetworkConfigurationPolicy for different NIC than default disables br-ex bridge and nodes lose connectivity 1908421 - [v2v] [VM import RHV to CNV] Windows imported VM boot failed: INACCESSIBLE BOOT DEVICE error 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1909458 - [V2V][VMware to CNV VM import via api using VMIO] VM import to Ceph RBD/BLOCK fails on "qemu-img: /data/disk.img" error 1910857 - Provide a mechanism to enable the HotplugVolumes feature gate via HCO 1911118 - Windows VMI LiveMigration / shutdown fails on 'XML error: non unique alias detected: ua-') 1911396 - Set networkInterfaceMultiqueue false in rhel 6 template for e1000e interface 1911662 - el6 guests don't work properly if virtio bus is specified on various devices 1912908 - Allow using "scsi" bus for disks in template validation 1913248 - Creating vlan interface on top of a bond device via NodeNetworkConfigurationPolicy fails 1913320 - Informative message needed with virtctl image-upload, that additional step is needed from the user 1913717 - Users should have read permitions for golden images data volumes 1913756 - Migrating to Ceph-RBD + Block fails when skipping zeroes 1914177 - CNV does not preallocate blank file data 
volumes 1914608 - Obsolete CPU models (kubevirt-cpu-plugin-configmap) are set on worker nodes 1914947 - HPP golden images - DV shoudld not be created with WaitForFirstConsumer 1917908 - [VMIO] vmimport pod fail to create when using ceph-rbd/block 1917963 - [CNV 2.6] Unable to install CNV disconnected - requires kvm-info-nfd-plugin which is not mirrored 1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration 1920576 - HCO can report ready=true when it failed to create a CR for a component operator 1920610 - e2e-aws-4.7-cnv consistently failing on Hyperconverged Cluster Operator 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923979 - kubernetes-nmstate: nmstate-handler pod crashes when configuring bridge device using ip tool 1927373 - NoExecute taint violates pdb; VMIs are not live migrated 1931376 - VMs disconnected from nmstate-defined bridge after CNV-2.5.4->CNV-2.6.0 upgrade

Installation note:

Safari 13.0.3 may be obtained from the Mac App Store.

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Moderate: webkitgtk4 security, bug fix, and enhancement update
Advisory ID: RHSA-2020:4035-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2020:4035
Issue date: 2020-09-29
CVE Names: CVE-2019-6237 CVE-2019-6251 CVE-2019-8506 CVE-2019-8524 CVE-2019-8535 CVE-2019-8536 CVE-2019-8544 CVE-2019-8551 CVE-2019-8558 CVE-2019-8559 CVE-2019-8563 CVE-2019-8571 CVE-2019-8583 CVE-2019-8584 CVE-2019-8586 CVE-2019-8587 CVE-2019-8594 CVE-2019-8595 CVE-2019-8596 CVE-2019-8597 CVE-2019-8601 CVE-2019-8607 CVE-2019-8608 CVE-2019-8609 CVE-2019-8610 CVE-2019-8611 CVE-2019-8615 CVE-2019-8619 CVE-2019-8622 CVE-2019-8623 CVE-2019-8625 CVE-2019-8644 CVE-2019-8649 CVE-2019-8658 CVE-2019-8666 CVE-2019-8669 CVE-2019-8671 CVE-2019-8672 CVE-2019-8673 CVE-2019-8674 CVE-2019-8676 CVE-2019-8677 CVE-2019-8678 CVE-2019-8679 CVE-2019-8680 CVE-2019-8681 CVE-2019-8683 CVE-2019-8684 CVE-2019-8686 CVE-2019-8687 CVE-2019-8688 CVE-2019-8689 CVE-2019-8690 CVE-2019-8707 CVE-2019-8710 CVE-2019-8719 CVE-2019-8720 CVE-2019-8726 CVE-2019-8733 CVE-2019-8735 CVE-2019-8743 CVE-2019-8763 CVE-2019-8764 CVE-2019-8765 CVE-2019-8766 CVE-2019-8768 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8821 CVE-2019-8822 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-11070 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-10018 CVE-2020-11793
====================================================================

1. Summary:

An update for webkitgtk4 is now available for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - noarch, x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

3. Description:

WebKitGTK+ is port of the WebKit portable web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2). (BZ#1817144)

Security Fix(es):

  • webkitgtk: Multiple security issues (CVE-2019-6237, CVE-2019-6251, CVE-2019-8506, CVE-2019-8524, CVE-2019-8535, CVE-2019-8536, CVE-2019-8544, CVE-2019-8551, CVE-2019-8558, CVE-2019-8559, CVE-2019-8563, CVE-2019-8571, CVE-2019-8583, CVE-2019-8584, CVE-2019-8586, CVE-2019-8587, CVE-2019-8594, CVE-2019-8595, CVE-2019-8596, CVE-2019-8597, CVE-2019-8601, CVE-2019-8607, CVE-2019-8608, CVE-2019-8609, CVE-2019-8610, CVE-2019-8611, CVE-2019-8615, CVE-2019-8619, CVE-2019-8622, CVE-2019-8623, CVE-2019-8625, CVE-2019-8644, CVE-2019-8649, CVE-2019-8658, CVE-2019-8666, CVE-2019-8669, CVE-2019-8671, CVE-2019-8672, CVE-2019-8673, CVE-2019-8674, CVE-2019-8676, CVE-2019-8677, CVE-2019-8678, CVE-2019-8679, CVE-2019-8680, CVE-2019-8681, CVE-2019-8683, CVE-2019-8684, CVE-2019-8686, CVE-2019-8687, CVE-2019-8688, CVE-2019-8689, CVE-2019-8690, CVE-2019-8707, CVE-2019-8710, CVE-2019-8719, CVE-2019-8720, CVE-2019-8726, CVE-2019-8733, CVE-2019-8735, CVE-2019-8743, CVE-2019-8763, CVE-2019-8764, CVE-2019-8765, CVE-2019-8766, CVE-2019-8768, CVE-2019-8769, CVE-2019-8771, CVE-2019-8782, CVE-2019-8783, CVE-2019-8808, CVE-2019-8811, CVE-2019-8812, CVE-2019-8813, CVE-2019-8814, CVE-2019-8815, CVE-2019-8816, CVE-2019-8819, CVE-2019-8820, CVE-2019-8821, CVE-2019-8822, CVE-2019-8823, CVE-2019-8835, CVE-2019-8844, CVE-2019-8846, CVE-2019-11070, CVE-2020-3862, CVE-2020-3864, CVE-2020-3865, CVE-2020-3867, CVE-2020-3868, CVE-2020-3885, CVE-2020-3894, CVE-2020-3895, CVE-2020-3897, CVE-2020-3899, CVE-2020-3900, CVE-2020-3901, CVE-2020-3902, CVE-2020-10018, CVE-2020-11793)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
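As a rough illustration only (this helper is not Red Hat tooling), an installed webkitgtk4 version-release string can be compared against the fixed build 2.28.2-2.el7 with a simplified take on RPM label comparison. Real checks should rely on rpm/yum itself, which implements the full algorithm (epochs, tilde handling, and so on):

```python
import re

def evr_key(vr: str):
    # Split "2.28.2-2.el7" into (version, release), each broken into
    # numeric and alphabetic chunks -- a simplified stand-in for rpm's
    # full label comparison (no epoch or tilde handling).
    def chunks(s: str):
        return [int(c) if c.isdigit() else c for c in re.findall(r"\d+|[a-zA-Z]+", s)]
    version, _, release = vr.partition("-")
    return (chunks(version), chunks(release))

def is_fixed(installed_vr: str, fixed_vr: str = "2.28.2-2.el7") -> bool:
    # True when the installed version-release is at or above the fixed build.
    return evr_key(installed_vr) >= evr_key(fixed_vr)

print(is_fixed("2.28.1-1.el7"))  # older build: False
print(is_fixed("2.28.2-2.el7"))  # the fixed build itself: True
```

This only sketches the ordering idea; on a live system, `rpm -q webkitgtk4` plus rpm's own comparison is authoritative.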

5. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

ppc64: webkitgtk4-2.28.2-2.el7.ppc.rpm webkitgtk4-2.28.2-2.el7.ppc64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le: webkitgtk4-2.28.2-2.el7.ppc64le.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x: webkitgtk4-2.28.2-2.el7.s390.rpm webkitgtk4-2.28.2-2.el7.s390x.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64: webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x: webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-devel-2.28.2-2.el7.s390.rpm webkitgtk4-devel-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

These packages are GPG signed by Red Hat for security.

6. References:

https://access.redhat.com/security/cve/CVE-2019-6237 https://access.redhat.com/security/cve/CVE-2019-6251 https://access.redhat.com/security/cve/CVE-2019-8506 https://access.redhat.com/security/cve/CVE-2019-8524 https://access.redhat.com/security/cve/CVE-2019-8535 https://access.redhat.com/security/cve/CVE-2019-8536 https://access.redhat.com/security/cve/CVE-2019-8544 https://access.redhat.com/security/cve/CVE-2019-8551 https://access.redhat.com/security/cve/CVE-2019-8558 https://access.redhat.com/security/cve/CVE-2019-8559 https://access.redhat.com/security/cve/CVE-2019-8563 https://access.redhat.com/security/cve/CVE-2019-8571 https://access.redhat.com/security/cve/CVE-2019-8583 https://access.redhat.com/security/cve/CVE-2019-8584 https://access.redhat.com/security/cve/CVE-2019-8586 https://access.redhat.com/security/cve/CVE-2019-8587 https://access.redhat.com/security/cve/CVE-2019-8594 https://access.redhat.com/security/cve/CVE-2019-8595 https://access.redhat.com/security/cve/CVE-2019-8596 https://access.redhat.com/security/cve/CVE-2019-8597 https://access.redhat.com/security/cve/CVE-2019-8601 https://access.redhat.com/security/cve/CVE-2019-8607 https://access.redhat.com/security/cve/CVE-2019-8608 https://access.redhat.com/security/cve/CVE-2019-8609 https://access.redhat.com/security/cve/CVE-2019-8610 https://access.redhat.com/security/cve/CVE-2019-8611 https://access.redhat.com/security/cve/CVE-2019-8615 https://access.redhat.com/security/cve/CVE-2019-8619 https://access.redhat.com/security/cve/CVE-2019-8622 https://access.redhat.com/security/cve/CVE-2019-8623 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8644 https://access.redhat.com/security/cve/CVE-2019-8649 https://access.redhat.com/security/cve/CVE-2019-8658 https://access.redhat.com/security/cve/CVE-2019-8666 https://access.redhat.com/security/cve/CVE-2019-8669 https://access.redhat.com/security/cve/CVE-2019-8671 
https://access.redhat.com/security/cve/CVE-2019-8672 https://access.redhat.com/security/cve/CVE-2019-8673 https://access.redhat.com/security/cve/CVE-2019-8674 https://access.redhat.com/security/cve/CVE-2019-8676 https://access.redhat.com/security/cve/CVE-2019-8677 https://access.redhat.com/security/cve/CVE-2019-8678 https://access.redhat.com/security/cve/CVE-2019-8679 https://access.redhat.com/security/cve/CVE-2019-8680 https://access.redhat.com/security/cve/CVE-2019-8681 https://access.redhat.com/security/cve/CVE-2019-8683 https://access.redhat.com/security/cve/CVE-2019-8684 https://access.redhat.com/security/cve/CVE-2019-8686 https://access.redhat.com/security/cve/CVE-2019-8687 https://access.redhat.com/security/cve/CVE-2019-8688 https://access.redhat.com/security/cve/CVE-2019-8689 https://access.redhat.com/security/cve/CVE-2019-8690 https://access.redhat.com/security/cve/CVE-2019-8707 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8719 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8726 https://access.redhat.com/security/cve/CVE-2019-8733 https://access.redhat.com/security/cve/CVE-2019-8735 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8763 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8765 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8768 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 
https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8821 https://access.redhat.com/security/cve/CVE-2019-8822 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-11070 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2020 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBX3OjINzjgjWX9erEAQjqsg/9FnSEJ3umFx0gtnsZIVRP9YxMIVZhVQ8z rNnK/LGQWq1nPlNC5OF60WRcWA7cC74lh1jl/+xU6p+9JXTq9y9hQTd7Fcf+6T01 RYj2zJe6kGBY/53rhZJKCdb9zNXz1CkqsuvTPqVGIabUWTTlsBFnd6l4GK6QL4kM XVQZyWtmSfmLII4Ocdav9WocJzH6o1TbEo+O9Fm6WjdVOK+/+VzPki0/dW50CQAK R8u5tTXZR5m52RLmvhs/LTv3yUnmhEkhvrR0TtuR8KRfcP1/ytNwn3VidFefuAO1 PWrgpjIPWy/kbtZaZWK4fBblYj6bKCVD1SiBKQcOfCq0f16aqRP2niFoDXdAy467 eGu0JHkRsIRCLG2rY+JfOau5KtLRhRr0iRe5AhOVpAtUelzjAvEQEcVv4GmZXcwX rXfeagSjWzdo8Mf55d7pjORXAKhGdO3FQSeiCvzq9miZq3NBX4Jm4raobeskw/rJ 1ONqg4fE7Gv7rks8QOy5xErwI8Ut1TGJAgYOD8rmRptr05hBWQFJCfmoc4KpxsMe PJoRag0AZfYxYoMe5avMcGCYHosU63z3wS7gao9flj37NkEi6M134vGmCpPNmpGr w5HQly9SO3mD0a92xOUn42rrXq841ZkVu89fR6j9wBn8NAKLWH6eUjZkVMNmLRzh PKg+HFNkMjk=dS3G -----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://www.redhat.com/mailman/listinfo/rhsa-announce

To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor. Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state


Gentoo Linux Security Advisory                           GLSA 202003-22

                                           https://security.gentoo.org/

     Severity: Normal
        Title: WebkitGTK+: Multiple vulnerabilities
         Date: March 15, 2020
         Bugs: #699156, #706374, #709612
           ID: 202003-22


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which may lead to arbitrary code execution.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk          < 2.26.4                  >= 2.26.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"
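As a quick sanity check after (or before) upgrading, the installed version can be compared against the fixed one with GNU `sort -V`, which orders version strings numerically. This is a minimal sketch; the `installed` value below is illustrative (on Gentoo you would query the actual installed version, e.g. with `equery list net-libs/webkit-gtk`):

```shell
installed="2.26.3"   # illustrative value, not queried from the system
fixed="2.26.4"

# If the installed version sorts strictly before the fixed version,
# it falls in the vulnerable range (< 2.26.4).
if [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)" = "$installed" ] \
   && [ "$installed" != "$fixed" ]; then
  echo "vulnerable"
else
  echo "unaffected"
fi
```

With the illustrative values above this prints "vulnerable"; setting `installed="2.26.4"` (or later) prints "unaffected".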

References

[ 1 ] CVE-2019-8625   https://nvd.nist.gov/vuln/detail/CVE-2019-8625
[ 2 ] CVE-2019-8674   https://nvd.nist.gov/vuln/detail/CVE-2019-8674
[ 3 ] CVE-2019-8707   https://nvd.nist.gov/vuln/detail/CVE-2019-8707
[ 4 ] CVE-2019-8710   https://nvd.nist.gov/vuln/detail/CVE-2019-8710
[ 5 ] CVE-2019-8719   https://nvd.nist.gov/vuln/detail/CVE-2019-8719
[ 6 ] CVE-2019-8720   https://nvd.nist.gov/vuln/detail/CVE-2019-8720
[ 7 ] CVE-2019-8726   https://nvd.nist.gov/vuln/detail/CVE-2019-8726
[ 8 ] CVE-2019-8733   https://nvd.nist.gov/vuln/detail/CVE-2019-8733
[ 9 ] CVE-2019-8735   https://nvd.nist.gov/vuln/detail/CVE-2019-8735
[ 10 ] CVE-2019-8743   https://nvd.nist.gov/vuln/detail/CVE-2019-8743
[ 11 ] CVE-2019-8763   https://nvd.nist.gov/vuln/detail/CVE-2019-8763
[ 12 ] CVE-2019-8764   https://nvd.nist.gov/vuln/detail/CVE-2019-8764
[ 13 ] CVE-2019-8765   https://nvd.nist.gov/vuln/detail/CVE-2019-8765
[ 14 ] CVE-2019-8766   https://nvd.nist.gov/vuln/detail/CVE-2019-8766
[ 15 ] CVE-2019-8768   https://nvd.nist.gov/vuln/detail/CVE-2019-8768
[ 16 ] CVE-2019-8769   https://nvd.nist.gov/vuln/detail/CVE-2019-8769
[ 17 ] CVE-2019-8771   https://nvd.nist.gov/vuln/detail/CVE-2019-8771
[ 18 ] CVE-2019-8782   https://nvd.nist.gov/vuln/detail/CVE-2019-8782
[ 19 ] CVE-2019-8783   https://nvd.nist.gov/vuln/detail/CVE-2019-8783
[ 20 ] CVE-2019-8808   https://nvd.nist.gov/vuln/detail/CVE-2019-8808
[ 21 ] CVE-2019-8811   https://nvd.nist.gov/vuln/detail/CVE-2019-8811
[ 22 ] CVE-2019-8812   https://nvd.nist.gov/vuln/detail/CVE-2019-8812
[ 23 ] CVE-2019-8813   https://nvd.nist.gov/vuln/detail/CVE-2019-8813
[ 24 ] CVE-2019-8814   https://nvd.nist.gov/vuln/detail/CVE-2019-8814
[ 25 ] CVE-2019-8815   https://nvd.nist.gov/vuln/detail/CVE-2019-8815
[ 26 ] CVE-2019-8816   https://nvd.nist.gov/vuln/detail/CVE-2019-8816
[ 27 ] CVE-2019-8819   https://nvd.nist.gov/vuln/detail/CVE-2019-8819
[ 28 ] CVE-2019-8820   https://nvd.nist.gov/vuln/detail/CVE-2019-8820
[ 29 ] CVE-2019-8821   https://nvd.nist.gov/vuln/detail/CVE-2019-8821
[ 30 ] CVE-2019-8822   https://nvd.nist.gov/vuln/detail/CVE-2019-8822
[ 31 ] CVE-2019-8823   https://nvd.nist.gov/vuln/detail/CVE-2019-8823
[ 32 ] CVE-2019-8835   https://nvd.nist.gov/vuln/detail/CVE-2019-8835
[ 33 ] CVE-2019-8844   https://nvd.nist.gov/vuln/detail/CVE-2019-8844
[ 34 ] CVE-2019-8846   https://nvd.nist.gov/vuln/detail/CVE-2019-8846
[ 35 ] CVE-2020-3862   https://nvd.nist.gov/vuln/detail/CVE-2020-3862
[ 36 ] CVE-2020-3864   https://nvd.nist.gov/vuln/detail/CVE-2020-3864
[ 37 ] CVE-2020-3865   https://nvd.nist.gov/vuln/detail/CVE-2020-3865
[ 38 ] CVE-2020-3867   https://nvd.nist.gov/vuln/detail/CVE-2020-3867
[ 39 ] CVE-2020-3868   https://nvd.nist.gov/vuln/detail/CVE-2020-3868

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202003-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1850",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.2"
      },
      {
        "model": "icloud",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.4"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "6.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.3"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.15"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2019-8811",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-8811",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-160246",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2019-8811",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-8811",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201910-1778",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-160246",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-8811",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari, etc. are all products of Apple (Apple). Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple watchOS prior to 6.1; Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; Windows-based iCloud prior to 11.0; Windows-based iTunes 12.10 .2 version; versions prior to iCloud 7.15 based on the Windows platform. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. 
This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. \nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. 
\n\nThe compliance-operator image updates are now available for OpenShift\nContainer Platform 4.6. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. 
\n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1899479 - Aggregator pod tries to parse ConfigMaps without results\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902251 - The compliancesuite object returns error with ocp4-cis tailored profile\n1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object\n1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object\n1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator\n1909081 - Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error \"something else exists at that path\"\n1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2019-10-29-3 tvOS 13.2\n\ntvOS 13.2 is now available and addresses the following:\n\nAccounts\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A remote attacker may be able to leak memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at\nTechnische Universit\u00e4t Darmstadt\n\nApp Store\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A local attacker may be able to login to the account of a\npreviously logged in user without valid credentials. \nCVE-2019-8803: Kiyeon An, \ucc28\ubbfc\uaddc (CHA Minkyu)\n\nAudio\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8785: Ian Beer of Google Project Zero\nCVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure\n\nAVEVideoEncoder\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure\n\nFile System Events\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8798: ABC Research s.r.o. working with Trend Micro\u0027s Zero\nDay Initiative\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to read restricted memory\nDescription: A validation issue was addressed with improved input\nsanitization. 
\nCVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8782: Cheolung Lee of LINE+ Security Team\nCVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team\nCVE-2019-8808: found by OSS-Fuzz\nCVE-2019-8811: Soyeon Park of SSLab at Georgia Tech\nCVE-2019-8812: an anonymous researcher\nCVE-2019-8814: Cheolung Lee of LINE+ Security Team\nCVE-2019-8816: Soyeon Park of SSLab at Georgia Tech\nCVE-2019-8819: Cheolung Lee of LINE+ Security Team\nCVE-2019-8820: Samuel Gro\u00df of Google Project Zero\nCVE-2019-8821: Sergei Glazunov of Google Project Zero\nCVE-2019-8822: Sergei Glazunov of Google Project Zero\nCVE-2019-8823: Sergei Glazunov of Google Project Zero\n\nWebKit Process Model\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2019-8815: Apple\n\nAdditional recognition\n\nCFNetwork\nWe would like to acknowledge Lily Chen of Google for their\nassistance. \n\nKernel\nWe would like to acknowledge Jann Horn of Google Project Zero for\ntheir assistance. \n\nWebKit\nWe would like to acknowledge Dlive of Tencent\u0027s Xuanwu Lab and Zhiyi\nZhang of Codesafe Team of Legendsec at Qi\u0027anxin Group for their\nassistance. \n\nInstallation note:\n\nApple TV will periodically check for software updates. 
Alternatively,\nyou may manually check for software updates by selecting\n\"Settings -\u003e System -\u003e Software Update -\u003e Update Software.\"\n\nTo check the current version of software, select\n\"Settings -\u003e General -\u003e About.\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQJdBAEBCABHFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl24p5UpHHByb2R1Y3Qt\nc2VjdXJpdHktbm9yZXBseUBsaXN0cy5hcHBsZS5jb20ACgkQBz4uGe3y0M2DIQ/9\nFQmnN+1/tdXaFFI1PtdJ9hgXONcdsi+D05mREDTX7v0VaLzChX/N3DccI00Z1uT5\nVNKHRjInGYDZoO/UntzAWoZa+tcueaY23XhN9xTYrUlt1Ol1gIsaxTEgPtax4B9A\nPoqWb6S+oK1SHUxglGnlLtXkcyt3WHJ5iqan7BM9XX6dsriwgoBgKADpFi3FCXoa\ncFIvpoM6ZhxYyMPpxmMc1IRwgjDwOn2miyjkSaAONXw5R5YGRxSsjq+HkzYE3w1m\nm2NZElUB1nRmlyuU3aMsHUTxwAnfzryPiHRGUTcNZao39YBsyWz56sr3++g7qmnD\nuZZzBnISQpC6oJCWclw3UHcKHH+V0+1q059GHBoku6Xmkc5bPRnKdFgSf5OvyQUw\nXGjwL5UbGB5eTtdj/Kx5Rd/m5fFIUxVu7HB3bGQGhYHIc9iTdi9j3mCd3nOHCIEj\nRe07c084jl2Git4sH2Tva7tOqFyI2IyNVJ0LjBXO54fAC2mtFz3mkDFxCEzL5V92\nO/Wct2T6OpYghzkrOOlUEAQJTbwJjTZBWsUubcOoJo6P9JUPBDJKB0ibAaCWrt9I\n8OU5wRr3q0fTA3N/qdGGbQ/tgUwiMHGuqrHMv0XYGPfO5Qg5GuHpTYchZrP5nFwf\nziQuQtO92b1FA4sDI+ue1sDIG84tPrkTAeLmveBjezc=\n=KsmX\n-----END PGP SIGNATURE-----\n\n\n. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1732329 - Virtual Machine is missing documentation of its properties in yaml editor\n1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv\n1791753 - [RFE] [SSP] Template validator should check validations in template\u0027s parent template\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1848954 - KMP missing CA extensions  in cabundle of mutatingwebhookconfiguration\n1848956 - KMP  requires downtime for CA stabilization during certificate rotation\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1853911 - VM with dot in network name fails to start with unclear message\n1854098 - NodeNetworkState on workers doesn\u0027t have \"status\" key due to nmstate-handler pod failure to run \"nmstatectl show\"\n1856347 - SR-IOV : Missing network name for sriov during vm setup\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1859235 - Common Templates - after upgrade there are 2  common templates per each os-workload-flavor combination\n1860714 - No API information from `oc explain`\n1860992 - CNV upgrade - users are not removed from privileged  SecurityContextConstraints\n1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem\n1866593 - CDI is not handling vm disk clone\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868817 - Container-native Virtualization 2.6.0 Images\n1873771 - Improve the VMCreationFailed error message caused by VM low memory\n1874812 - SR-IOV: Guest Agent  expose link-local ipv6 address  for sometime and then remove it\n1878499 - DV import doesn\u0027t recover from scratch space PVC deletion\n1879108 - Inconsistent 
naming of \"oc virt\" command in help text\n1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running\n1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message\n1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used\n1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied\n1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. \n1891285 - Common templates and kubevirt-config cm - update machine-type\n1891440 - [v2v][VMware to CNV VM import API]Source VM with no network interface fail with unclear error\n1892227 - [SSP] cluster scoped resources are not being reconciled\n1893278 - openshift-virtualization-os-images namespace not seen by user\n1893646 - [HCO] Pod placement configuration - dry run is not performed for all the configuration stanza\n1894428 - Message for VMI not migratable is not clear enough\n1894824 - [v2v][VM import] Pick the smallest template for the imported VM, and not always Medium\n1894897 - [v2v][VMIO] VMimport CR is not reported as failed when target VM is deleted during the import\n1895414 - Virt-operator is accepting updates to the placement of its workload components even with running VMs\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1898072 - Add Fedora33 to Fedora common templates\n1898840 - [v2v] VM import VMWare to CNV Import 63 chars vm name should not fail\n1899558 - CNV 2.6 - nmstate fails to set state\n1901480 - VM disk io can\u0027t worked if namespace have label kubemacpool\n1902046 - Not possible to edit 
CDIConfig (through CDI CR / CDIConfig)\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1903014 - hco-webhook pod in CreateContainerError\n1903585 - [v2v] Windows 2012 VM imported from RHV goes into Windows repair mode\n1904797 - [VMIO][vmware] A migrated RHEL/Windows VM starts in emergency mode/safe mode when target storage is NFS and target namespace is NOT \"default\"\n1906199 - [CNV-2.5] CNV Tries to Install on Windows Workers\n1907151 - kubevirt version is not reported correctly via virtctl\n1907352 - VM/VMI link changes to `kubevirt.io~v1~VirtualMachineInstance` on CNV 2.6\n1907691 - [CNV] Configuring NodeNetworkConfigurationPolicy caused \"Internal error occurred\" for creating datavolume\n1907988 - VM loses dynamic IP address of its default interface after migration\n1908363 - Applying NodeNetworkConfigurationPolicy for different NIC than default disables br-ex bridge and nodes lose connectivity\n1908421 - [v2v] [VM import RHV to CNV] Windows imported VM boot failed: INACCESSIBLE BOOT DEVICE error\n1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference\n1909458 - [V2V][VMware to CNV VM import via api using VMIO] VM import  to Ceph RBD/BLOCK fails on \"qemu-img: /data/disk.img\" error\n1910857 - Provide a mechanism to enable the HotplugVolumes feature gate via HCO\n1911118 - Windows VMI LiveMigration / shutdown fails on \u0027XML error: non unique alias detected: ua-\u0027)\n1911396 - Set networkInterfaceMultiqueue false in rhel 6 template for e1000e interface\n1911662 - el6 guests don\u0027t work properly if virtio bus is specified on various devices\n1912908 - Allow using \"scsi\" bus for disks in template validation\n1913248 - Creating vlan interface on top of a bond device via NodeNetworkConfigurationPolicy fails\n1913320 - Informative message needed with virtctl image-upload, that additional step is needed from the user\n1913717 - Users should 
have read permitions for golden images data volumes\n1913756 - Migrating to Ceph-RBD + Block fails when skipping zeroes\n1914177 - CNV does not preallocate blank file data volumes\n1914608 - Obsolete CPU models (kubevirt-cpu-plugin-configmap) are set on worker nodes\n1914947 - HPP golden images - DV shoudld not be created with WaitForFirstConsumer\n1917908 - [VMIO] vmimport pod fail to create when using ceph-rbd/block\n1917963 - [CNV 2.6] Unable to install CNV disconnected - requires kvm-info-nfd-plugin which is not mirrored\n1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration\n1920576 - HCO can report ready=true when it failed to create a CR for a component operator\n1920610 - e2e-aws-4.7-cnv consistently failing on Hyperconverged Cluster Operator\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923979 - kubernetes-nmstate: nmstate-handler pod crashes when configuring bridge device using ip tool\n1927373 - NoExecute taint violates pdb; VMIs are not live migrated\n1931376 - VMs disconnected from nmstate-defined bridge after CNV-2.5.4-\u003eCNV-2.6.0 upgrade\n\n5. \n\nInstallation note:\n\nSafari 13.0.3 may be obtained from the Mac App Store. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: webkitgtk4 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:4035-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:4035\nIssue date:        2020-09-29\nCVE Names:         CVE-2019-6237 CVE-2019-6251 CVE-2019-8506\n                   CVE-2019-8524 CVE-2019-8535 CVE-2019-8536\n                   CVE-2019-8544 CVE-2019-8551 CVE-2019-8558\n                   CVE-2019-8559 CVE-2019-8563 CVE-2019-8571\n                   CVE-2019-8583 CVE-2019-8584 CVE-2019-8586\n                   CVE-2019-8587 CVE-2019-8594 CVE-2019-8595\n                   CVE-2019-8596 CVE-2019-8597 CVE-2019-8601\n                   CVE-2019-8607 CVE-2019-8608 CVE-2019-8609\n                   CVE-2019-8610 CVE-2019-8611 CVE-2019-8615\n                   CVE-2019-8619 CVE-2019-8622 CVE-2019-8623\n                   CVE-2019-8625 CVE-2019-8644 CVE-2019-8649\n                   CVE-2019-8658 CVE-2019-8666 CVE-2019-8669\n                   CVE-2019-8671 CVE-2019-8672 CVE-2019-8673\n                   CVE-2019-8674 CVE-2019-8676 CVE-2019-8677\n                   CVE-2019-8678 CVE-2019-8679 CVE-2019-8680\n                   CVE-2019-8681 CVE-2019-8683 CVE-2019-8684\n                   CVE-2019-8686 CVE-2019-8687 CVE-2019-8688\n                   CVE-2019-8689 CVE-2019-8690 CVE-2019-8707\n                   CVE-2019-8710 CVE-2019-8719 CVE-2019-8720\n                   CVE-2019-8726 CVE-2019-8733 CVE-2019-8735\n                   CVE-2019-8743 CVE-2019-8763 CVE-2019-8764\n                   CVE-2019-8765 CVE-2019-8766 CVE-2019-8768\n                   CVE-2019-8769 CVE-2019-8771 CVE-2019-8782\n                   CVE-2019-8783 CVE-2019-8808 CVE-2019-8811\n                   CVE-2019-8812 CVE-2019-8813 CVE-2019-8814\n               
    CVE-2019-8815 CVE-2019-8816 CVE-2019-8819\n                   CVE-2019-8820 CVE-2019-8821 CVE-2019-8822\n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844\n                   CVE-2019-8846 CVE-2019-11070 CVE-2020-3862\n                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867\n                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894\n                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899\n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902\n                   CVE-2020-10018 CVE-2020-11793\n====================================================================\n1. Summary:\n\nAn update for webkitgtk4 is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - noarch\n\n3. Description:\n\nWebKitGTK+ is a port of the WebKit portable web rendering engine to the GTK+\nplatform. These packages provide WebKitGTK+ for GTK+ 3. \n\nThe following packages have been upgraded to a later upstream version:\nwebkitgtk4 (2.28.2). 
(BZ#1817144)\n\nSecurity Fix(es):\n\n* webkitgtk: Multiple security issues (CVE-2019-6237, CVE-2019-6251,\nCVE-2019-8506, CVE-2019-8524, CVE-2019-8535, CVE-2019-8536, CVE-2019-8544,\nCVE-2019-8551, CVE-2019-8558, CVE-2019-8559, CVE-2019-8563, CVE-2019-8571,\nCVE-2019-8583, CVE-2019-8584, CVE-2019-8586, CVE-2019-8587, CVE-2019-8594,\nCVE-2019-8595, CVE-2019-8596, CVE-2019-8597, CVE-2019-8601, CVE-2019-8607,\nCVE-2019-8608, CVE-2019-8609, CVE-2019-8610, CVE-2019-8611, CVE-2019-8615,\nCVE-2019-8619, CVE-2019-8622, CVE-2019-8623, CVE-2019-8625, CVE-2019-8644,\nCVE-2019-8649, CVE-2019-8658, CVE-2019-8666, CVE-2019-8669, CVE-2019-8671,\nCVE-2019-8672, CVE-2019-8673, CVE-2019-8674, CVE-2019-8676, CVE-2019-8677,\nCVE-2019-8678, CVE-2019-8679, CVE-2019-8680, CVE-2019-8681, CVE-2019-8683,\nCVE-2019-8684, CVE-2019-8686, CVE-2019-8687, CVE-2019-8688, CVE-2019-8689,\nCVE-2019-8690, CVE-2019-8707, CVE-2019-8710, CVE-2019-8719, CVE-2019-8720,\nCVE-2019-8726, CVE-2019-8733, CVE-2019-8735, CVE-2019-8743, CVE-2019-8763,\nCVE-2019-8764, CVE-2019-8765, CVE-2019-8766, CVE-2019-8768, CVE-2019-8769,\nCVE-2019-8771, CVE-2019-8782, CVE-2019-8783, CVE-2019-8808, CVE-2019-8811,\nCVE-2019-8812, CVE-2019-8813, CVE-2019-8814, CVE-2019-8815, CVE-2019-8816,\nCVE-2019-8819, CVE-2019-8820, CVE-2019-8821, CVE-2019-8822, CVE-2019-8823,\nCVE-2019-8835, CVE-2019-8844, CVE-2019-8846, CVE-2019-11070, CVE-2020-3862,\nCVE-2020-3864, CVE-2020-3865, CVE-2020-3867, CVE-2020-3868, CVE-2020-3885,\nCVE-2020-3894, CVE-2020-3895, CVE-2020-3897, CVE-2020-3899, CVE-2020-3900,\nCVE-2020-3901, CVE-2020-3902, CVE-2020-10018, CVE-2020-11793)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nppc64:\nwebkitgtk4-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm\n\nppc64le:\nwebkitgtk4-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm\n\ns390x:\nwebkitgtk4-2.28.2-2.el7.s390.rpm\nwebkitgtk4-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390x.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nppc64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm\n\ns390x:\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-6237\nhttps://access.redhat.com/security/cve/CVE-2019-6251\nhttps://access.redhat.com/security/cve/CVE-2019-8506\nhttps://access.redhat.com/security/cve/CVE-2019-8524\nhttps://access.redhat.com/security/cve/CVE-2019-8535\nhttps://access.redhat.com/security/cve/CVE-2019-8536\nhttps://access.redhat.com/security/cve/CVE-2019-8544\nhttps://access.redhat.com/security/cve/CVE-2019-8551\nhttps://access.redhat.com/security/cve/CVE-2019-8558\nhttps://access.redhat.com/security/cve/CVE-2019-8559\nhttps://access.redhat.com/security/cve/CVE-2019-8563\nhttps://access.redhat.com/security/cve/CVE-2019-8571\nhttps://access.redhat.com/security/cve/CVE-2019-8583\nhttps://access.redhat.com/security/cve/CVE-2019-8584\nhttps://access.redhat.com/security/cve/CVE-2019-8586\nhttps://access.redhat.com/security/cve/CVE-2019-8587\nhttps://access.redhat.com/security/cve/CVE-2019-8594\nhttps://access.redhat.com/security/cve/CVE-2019-8595\nhttps://access.redhat.com/security/cve/CVE-2019-8596\nhttps://access.redhat.com/security/cve/CVE-2019-8597\nhttps://access.redhat.com/security/cve/CVE-2019-8601\nhttps://access.redhat.com/security/cve/CVE-2019-8607\nhttps://access.redhat.com/security/cve/CVE-2019-8608\nhttps://access.redhat.com/security/cve/CVE-2019-8609\nhttps://access.redhat.com/security/cve/CVE-2019-8610\n
https://access.redhat.com/security/cve/CVE-2019-8611\nhttps://access.redhat.com/security/cve/CVE-2019-8615\nhttps://access.redhat.com/security/cve/CVE-2019-8619\nhttps://access.redhat.com/security/cve/CVE-2019-8622\nhttps://access.redhat.com/security/cve/CVE-2019-8623\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8644\nhttps://access.redhat.com/security/cve/CVE-2019-8649\nhttps://access.redhat.com/security/cve/CVE-2019-8658\nhttps://access.redhat.com/security/cve/CVE-2019-8666\nhttps://access.redhat.com/security/cve/CVE-2019-8669\nhttps://access.redhat.com/security/cve/CVE-2019-8671\nhttps://access.redhat.com/security/cve/CVE-2019-8672\nhttps://access.redhat.com/security/cve/CVE-2019-8673\nhttps://access.redhat.com/security/cve/CVE-2019-8674\nhttps://access.redhat.com/security/cve/CVE-2019-8676\nhttps://access.redhat.com/security/cve/CVE-2019-8677\nhttps://access.redhat.com/security/cve/CVE-2019-8678\nhttps://access.redhat.com/security/cve/CVE-2019-8679\nhttps://access.redhat.com/security/cve/CVE-2019-8680\nhttps://access.redhat.com/security/cve/CVE-2019-8681\nhttps://access.redhat.com/security/cve/CVE-2019-8683\nhttps://access.redhat.com/security/cve/CVE-2019-8684\nhttps://access.redhat.com/security/cve/CVE-2019-8686\nhttps://access.redhat.com/security/cve/CVE-2019-8687\nhttps://access.redhat.com/security/cve/CVE-2019-8688\nhttps://access.redhat.com/security/cve/CVE-2019-8689\nhttps://access.redhat.com/security/cve/CVE-2019-8690\nhttps://access.redhat.com/security/cve/CVE-2019-8707\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8719\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8726\nhttps://access.redhat.com/security/cve/CVE-2019-8733\nhttps://access.redhat.com/security/cve/CVE-2019-8735\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8763\nht
tps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8765\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8768\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8821\nhttps://access.redhat.com/security/cve/CVE-2019-8822\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-11070\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhtt
ps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBX3OjINzjgjWX9erEAQjqsg/9FnSEJ3umFx0gtnsZIVRP9YxMIVZhVQ8z\nrNnK/LGQWq1nPlNC5OF60WRcWA7cC74lh1jl/+xU6p+9JXTq9y9hQTd7Fcf+6T01\nRYj2zJe6kGBY/53rhZJKCdb9zNXz1CkqsuvTPqVGIabUWTTlsBFnd6l4GK6QL4kM\nXVQZyWtmSfmLII4Ocdav9WocJzH6o1TbEo+O9Fm6WjdVOK+/+VzPki0/dW50CQAK\nR8u5tTXZR5m52RLmvhs/LTv3yUnmhEkhvrR0TtuR8KRfcP1/ytNwn3VidFefuAO1\nPWrgpjIPWy/kbtZaZWK4fBblYj6bKCVD1SiBKQcOfCq0f16aqRP2niFoDXdAy467\neGu0JHkRsIRCLG2rY+JfOau5KtLRhRr0iRe5AhOVpAtUelzjAvEQEcVv4GmZXcwX\nrXfeagSjWzdo8Mf55d7pjORXAKhGdO3FQSeiCvzq9miZq3NBX4Jm4raobeskw/rJ\n1ONqg4fE7Gv7rks8QOy5xErwI8Ut1TGJAgYOD8rmRptr05hBWQFJCfmoc4KpxsMe\nPJoRag0AZfYxYoMe5avMcGCYHosU63z3wS7gao9flj37NkEi6M134vGmCpPNmpGr\nw5HQly9SO3mD0a92xOUn42rrXq841ZkVu89fR6j9wBn8NAKLWH6eUjZkVMNmLRzh\nPKg+HFNkMjk=dS3G\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1823765 - nfd-workers crash under an ipv6 environment\n1838802 - mysql8 connector from operatorhub does not work with metering operator\n1838845 - Metering operator can\u0027t connect to postgres DB from Operator Hub\n1841883 - namespace-persistentvolumeclaim-usage  query returns unexpected values\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1868294 - NFD operator does not allow customisation of nfd-worker.conf\n1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration\n1890672 - NFD is missing a build flag to build correctly\n1890741 - path to the CA trust bundle ConfigMap is broken in report operator\n1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster\n1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel\n1900125 - FIPS error while generating RSA private key for CA\n1906129 - OCP 4.7:  Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub\n1908492 - OCP 4.7:  Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub\n1913837 - The CI and ART 4.7 metering images are not mirrored\n1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le\n1916010 - olm skip range is set to the wrong range\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923998 - NFD Operator is failing to update and remains in Replacing state\n\n5. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202003-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: March 15, 2020\n     Bugs: #699156, #706374, #709612\n       ID: 202003-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich may lead to arbitrary code execution. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk          \u003c 2.26.4                  \u003e= 2.26.4\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.26.4\"\n\nReferences\n==========\n\n[  1 ] CVE-2019-8625\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8625\n[  2 ] CVE-2019-8674\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8674\n[  3 ] CVE-2019-8707\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8707\n[  4 ] CVE-2019-8710\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8710\n[  5 ] CVE-2019-8719\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8719\n[  6 ] CVE-2019-8720\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8720\n[  7 ] CVE-2019-8726\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8726\n[  8 ] CVE-2019-8733\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8733\n[  9 ] CVE-2019-8735\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8735\n[ 10 ] CVE-2019-8743\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8743\n[ 11 ] CVE-2019-8763\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8763\n[ 12 ] CVE-2019-8764\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8764\n[ 13 ] CVE-2019-8765\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8765\n[ 14 ] CVE-2019-8766\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8766\n[ 15 ] CVE-2019-8768\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8768\n[ 16 ] CVE-2019-8769\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8769\n[ 17 ] CVE-2019-8771\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8771\n[ 18 ] CVE-2019-8782\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8782\n[ 19 ] CVE-2019-8783\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8783\n[ 20 ] CVE-2019-8808\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8808\n[ 21 ] CVE-2019-8811\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8811\n[ 22 ] CVE-2019-8812\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8812\n[ 23 ] CVE-2019-8813\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8813\n[ 24 ] 
CVE-2019-8814\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8814\n[ 25 ] CVE-2019-8815\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8815\n[ 26 ] CVE-2019-8816\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8816\n[ 27 ] CVE-2019-8819\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8819\n[ 28 ] CVE-2019-8820\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8820\n[ 29 ] CVE-2019-8821\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8821\n[ 30 ] CVE-2019-8822\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8822\n[ 31 ] CVE-2019-8823\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8823\n[ 32 ] CVE-2019-8835\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8835\n[ 33 ] CVE-2019-8844\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8844\n[ 34 ] CVE-2019-8846\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8846\n[ 35 ] CVE-2020-3862\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3862\n[ 36 ] CVE-2020-3864\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3864\n[ 37 ] CVE-2020-3865\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3865\n[ 38 ] CVE-2020-3867\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3867\n[ 39 ] CVE-2020-3868\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3868\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202003-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8811"
      },
      {
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-8811",
        "trust": 2.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "155069",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4233",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4013",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4456",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155216",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-14242",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-160246",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8811",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "155059",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159375",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "id": "VAR-201912-1850",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160246"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:35:23.528000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Multiple Apple product WebKit Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=105613"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      },
      {
        "title": "eevee",
        "trust": 0.1,
        "url": "https://github.com/jfmcoronel/eevee "
      },
      {
        "title": "DIE",
        "trust": 0.1,
        "url": "https://github.com/sslab-gatech/DIE "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210721"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210723"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210724"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210725"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210726"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210727"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210728"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.6,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht201222"
      },
      {
        "trust": 0.6,
        "url": "https://wpewebkit.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://webkitgtk.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20193044-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210725"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210728"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155069/apple-security-advisory-2019-10-29-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159816/red-hat-security-advisory-2020-4451-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155216/webkitgtk-wpe-webkit-code-execution-xss.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4456/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4013/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156742/gentoo-linux-security-advisory-202003-22.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4233/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166279/red-hat-security-advisory-2022-0056-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-30746"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.2,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/jfmcoronel/eevee"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/sslab-gatech/die"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8787"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8794"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8798"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8797"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8803"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8795"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8688"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8678"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8669"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8644"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8680"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8726"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "date": "2022-08-09T14:36:05",
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2019-11-01T17:11:43",
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2019-11-01T17:06:21",
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "date": "2020-09-30T15:47:21",
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2020-03-15T14:00:23",
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "date": "2019-10-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      },
      {
        "date": "2019-12-18T18:15:43.583000",
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160246"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8811"
      },
      {
        "date": "2022-03-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      },
      {
        "date": "2024-11-21T04:50:31.127000",
        "db": "NVD",
        "id": "CVE-2019-8811"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple Apple Product Buffer Error Vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1778"
      }
    ],
    "trust": 0.6
  }
}

VAR-202002-1478

Vulnerability from variot - Updated: 2025-12-22 23:32

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, and iCloud for Windows 7.17. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari is Apple's web browser and the default browser on the Mac OS X and iOS operating systems; Apple tvOS is a smart TV operating system that supports storage of music, photos, apps, contacts, and similar content. A security vulnerability exists in WebKit Page Loading, a page-loading component shared by several Apple products. The following products and versions are affected: iCloud for Windows versions prior to 10.9.2 and 7.17; iTunes for Windows versions prior to 12.10.4; Apple tvOS versions prior to 13.3.1; and Safari versions prior to 13.0.5. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error that could deanonymize users. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections: an attacker could cause malicious web content to be displayed as if it came from a trusted URI, similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601) An out-of-bounds read was addressed with improved input validation. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4; no further details are available in NIST. This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history, so a user may be unable to delete browsing history items; the issue was addressed with improved data deletion and is fixed in macOS Catalina 10.15. (CVE-2019-8768) An issue existed in the drawing of web page elements, so visiting a maliciously crafted website may reveal browsing history; this is fixed in iOS 13.1 and iPadOS 13.1, and macOS Catalina 10.15. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846) WebKitGTK and WPE WebKit up to and including 2.26.4 (the versions right prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK that allows remote attackers to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation; an application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902)

Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3
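The image references above can be fetched with a container runtime. A minimal sketch, assuming `podman` is installed (the advisory lists only the image references; the `podman pull` invocation is an assumption, and `docker pull` works the same way):

```shell
# Print the pull command for each Quay v3.3.3 release image listed in the
# advisory. Piping the output to `sh` would execute the pulls; podman is an
# assumed, not advisory-mandated, runtime.
for image in \
  quay.io/redhat/quay:v3.3.3 \
  quay.io/redhat/clair-jwt:v3.3.3 \
  quay.io/redhat/quay-builder:v3.3.3 \
  quay.io/redhat/clair:v3.3.3
do
  echo "podman pull ${image}"
done
```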

  1. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

  1. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1918990 - ComplianceSuite scans use quay content image for initContainer
1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present
1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules
1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from oc explain
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.

Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment
1838802 - mysql8 connector from operatorhub does not work with metering operator
1838845 - Metering operator can't connect to postgres DB from Operator Hub
1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1868294 - NFD operator does not allow customisation of nfd-worker.conf
1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration
1890672 - NFD is missing a build flag to build correctly
1890741 - path to the CA trust bundle ConfigMap is broken in report operator
1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster
1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel
1900125 - FIPS error while generating RSA private key for CA
1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub
1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub
1913837 - The CI and ART 4.7 metering images are not mirrored
1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le
1916010 - olm skip range is set to the wrong range
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923998 - NFD Operator is failing to update and remains in Replacing state

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673
                   CVE-2022-24407
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
  • grafana: Snapshot authentication bypass (CVE-2021-39226)
  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
  • nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
  • grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
  • grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
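Since these digests are the authoritative identifiers for the release payloads, a script can compare the digest a cluster reports against the value published in this advisory. Below is a minimal sketch for the x86_64 payload; the `REPORTED` value is stubbed for illustration (in practice it would be extracted from the output of `oc adm release info`, for example from the `digest` field of its JSON output).

```shell
#!/bin/sh
# Sketch: compare a reported release digest against the published one.
# EXPECTED is the x86_64 digest from this advisory; REPORTED is stubbed
# here, and would normally be populated from a live `oc adm release info`
# query against the cluster's release image.
EXPECTED="sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
REPORTED="sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"

if [ "$REPORTED" = "$EXPECTED" ]; then
    echo "release digest verified"
else
    echo "release digest mismatch" >&2
    exit 1
fi
```

Any mismatch here means the inspected image is not the payload this advisory describes and should not be deployed.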

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
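When the update should be pinned to exactly this payload rather than whatever the channel currently serves, `oc adm upgrade` can also be pointed at the release image by digest. The sketch below only prints the command it would run, since executing it requires a logged-in cluster-admin session.

```shell
#!/bin/sh
# Sketch: request an update to this exact release payload by digest.
# The pullspec combines the release repository with the x86_64 digest
# published in this advisory. Printed rather than executed, because a
# live cluster-admin session is needed to actually run it.
RELEASE_IMAGE="quay.io/openshift-release-dev/ocp-release@sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
echo "oc adm upgrade --allow-explicit-upgrade --to-image ${RELEASE_IMAGE}"
```

Updating by digest with `--allow-explicit-upgrade` bypasses channel graph checks, so it should be reserved for cases where the target release has already been validated, as in this errata.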

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for detailed instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform 1976894 - Unidling a StatefulSet does not work as expected 1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases 1977414 - Build Config timed out waiting for condition 400: Bad Request 1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus 1978528 - systemd-coredump started and failed intermittently for unknown reasons 1978581 - machine-config-operator: remove runlevel from mco namespace 1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable 1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9 1979966 - OCP builds always fail when run on RHEL7 nodes 1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading 1981549 - Machine-config daemon does not recover from broken Proxy configuration 1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel] 1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues 1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page 1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands 1982662 - Workloads - DaemonSets - Add storage: i18n misses 1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters 1983758 - upgrades are failing on disruptive tests 1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma" 1984592 - global pull secret not working in OCP4.7.4+ for additional private registries 1985073 - new-in-4.8 
ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs 1985486 - Cluster Proxy not used during installation on OSP with Kuryr 1985724 - VM Details Page missing translations 1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted 1985933 - Downstream image registry recommendation 1985965 - oVirt CSI driver does not report volume stats 1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding" 1986237 - "MachineNotYetDeleted" in Pending state , alert not fired 1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid" 1986302 - console continues to fetch prometheus alert and silences for normal user 1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI 1986338 - error creating list of resources in Import YAML 1986502 - yaml multi file dnd duplicates previous dragged files 1986819 - fix string typos for hot-plug disks 1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure 1987136 - Declare operatorframework.io/arch. 
labels for all operators 1987257 - Go-http-client user-agent being used for oc adm mirror requests 1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold 1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP 1988406 - SSH key dropped when selecting "Customize virtual machine" in UI 1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade 1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs 1989438 - expected replicas is wrong 1989502 - Developer Catalog is disappearing after short time 1989843 - 'More' and 'Show Less' functions are not translated on several page 1990014 - oc debug does not work for Windows pods 1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created 1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page 1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar 1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI 1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks 1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var 1990625 - Ironic agent registers with SLAAC address with privacy-stable 1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time 1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
1991573 - Enable typescript strictNullCheck on network-policies files 1991641 - Baremetal Cluster Operator still Available After Delete Provisioning 1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator 1991819 - Misspelled word "ocurred" in oc inspect cmd 1991942 - Alignment and spacing fixes 1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked 1992453 - The configMap failed to save on VM environment tab 1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab 1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab 1992509 - Could not customize boot source due to source PVC not found 1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines 1992580 - storageProfile should stay with the same value by check/uncheck the apply button 1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply 1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios 1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed) 1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing 1994094 - Some hardcodes are detected at the code level in OpenShift console components 1994142 - Missing required cloud config fields for IBM Cloud 1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools 1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart 1995335 - [SCALE] ovnkube CNI: remove ovs flows check 1995493 - Add Secret to workload button and Actions button are not aligned on secret details page 1995531 - Create RDO-based Ironic image to be promoted to OKD 1995545 - Project drop-down 
amalgamates inside main screen while creating storage system for odf-operator 1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs 1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread 1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole 1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN 1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down 1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page 1996647 - Provide more useful degraded message in auth operator on DNS errors 1996736 - Large number of 501 lr-policies in INCI2 env 1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes 1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP 1996928 - Enable default operator indexes on ARM 1997028 - prometheus-operator update removes env var support for thanos-sidecar 1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used 1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller. 
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI 1997269 - Have to refresh console to install kube-descheduler 1997478 - Storage operator is not available after reboot cluster instances 1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 1997967 - storageClass is not reserved from default wizard to customize wizard 1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order 1998038 - [e2e][automation] add tests for UI for VM disk hot-plug 1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus 1998174 - Create storageclass gp3-csi after install ocp cluster on aws 1998183 - "r: Bad Gateway" info is improper 1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected 1998377 - Filesystem table head is not full displayed in disk tab 1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory 1998519 - Add fstype when create localvolumeset instance on web console 1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses 1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page 1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable 1999091 - Console update toast notification can appear multiple times 1999133 - removing and recreating static pod manifest leaves pod in error state 1999246 - .indexignore is not ingore when oc command load dc configuration 1999250 - ArgoCD in GitOps operator can't manage namespaces 1999255 - ovnkube-node always crashes out the first time it starts 1999261 - ovnkube-node log spam (and security token leak?) 
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page 1999314 - console-operator is slow to mark Degraded as False once console starts working 1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck) 1999556 - "master" pool should be updated before the CVO reports available at the new version occurred 1999578 - AWS EFS CSI tests are constantly failing 1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages 1999619 - cloudinit is malformatted if a user sets a password during VM creation flow 1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow 1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined 1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub) 1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource 1999771 - revert "force cert rotation every couple days for development" in 4.10 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace. 
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions 1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form 1999983 - No way to clear upload error from template boot source 2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter 2000096 - Git URL is not re-validated on edit build-config form reload 2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig 2000236 - Confusing usage message from dynkeepalived CLI 2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported 2000430 - bump cluster-api-provider-ovirt version in installer 2000450 - 4.10: Enable static PV multi-az test 2000490 - All critical alerts shipped by CMO should have links to a runbook 2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded) 2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster 2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled 2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console 2000754 - IPerf2 tests should be lower 2000846 - Structure logs in the entire codebase of Local Storage Operator 2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24 2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM 2000938 - CVO does not respect changes to a Deployment strategy 2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy 2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone 2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole 2001295 - Remove 
openshift:kubevirt-machine-controllers decleration from machine-api 2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error 2001337 - Details Card in ODF Dashboard mentions OCS 2001339 - fix text content hotplug 2001413 - [e2e][automation] add/delete nic and disk to template 2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log 2001442 - Empty termination.log file for the kube-apiserver has too permissive mode 2001479 - IBM Cloud DNS unable to create/update records 2001566 - Enable alerts for prometheus operator in UWM 2001575 - Clicking on the perspective switcher shows a white page with loader 2001577 - Quick search placeholder is not displayed properly when the search string is removed 2001578 - [e2e][automation] add tests for vm dashboard tab 2001605 - PVs remain in Released state for a long time after the claim is deleted 2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options 2001620 - Cluster becomes degraded if it can't talk to Manila 2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF 2001761 - Unable to apply cluster operator storage for SNO on GCP platform. 
2001765 - Some error message in the log of diskmaker-manager caused confusion 2001784 - show loading page before final results instead of showing a transient message No log files exist 2001804 - Reload feature on Environment section in Build Config form does not work properly 2001810 - cluster admin unable to view BuildConfigs in all namespaces 2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well 2001823 - OCM controller must update operator status 2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start 2001835 - Could not select image tag version when create app from dev console 2001855 - Add capacity is disabled for ocs-storagecluster 2001856 - Repeating event: MissingVersion no image found for operand pod 2001959 - Side nav list borders don't extend to edges of container 2002007 - Layout issue on "Something went wrong" page 2002010 - ovn-kube may never attempt to retry a pod creation 2002012 - Cannot change volume mode when cloning a VM from a template 2002027 - Two instances of Dotnet helm chart show as one in topology 2002075 - opm render does not automatically pulling in the image(s) used in the deployments 2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster 2002125 - Network policy details page heading should be updated to Network Policy details 2002133 - [e2e][automation] add support/virtualization and improve deleteResource 2002134 - [e2e][automation] add test to verify vm details tab 2002215 - Multipath day1 not working on s390x 2002238 - Image stream tag is not persisted when switching from yaml to form editor 2002262 - [vSphere] Incorrect user agent in vCenter sessions list 2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector 2002276 - OLM fails to upgrade operators immediately 2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods 2002354 - 
Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error .
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node
join switch 2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9 2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy 2023342 - SCC admission should take ephemeralContainers into account 2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden) 2023434 - Update Azure Machine Spec API to accept Marketplace Images 2023500 - Latency experienced while waiting for volumes to attach to node 2023522 - can't remove package from index: database is locked 2023560 - "Network Attachment Definitions" has no project field on the top in the list view 2023592 - [e2e][automation] add mac spoof check for nad 2023604 - ACL violation when deleting a provisioning-configuration resource 2023607 - console returns blank page when normal user without any projects visit Installed Operators page 2023638 - Downgrade support level for extended control plane integration to Dev Preview 2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10 2023675 - Changing CNV Namespace 2023779 - Fix Patch 104847 in 4.9 2023781 - initial hardware devices is not loading in wizard 2023832 - CCO updates lastTransitionTime for non-Status changes 2023839 - Bump recommended FCOS to 34.20211031.3.0 2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly 2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository 2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8 2024055 - External DNS added extra prefix for the TXT record 2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully 2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json 2024199 - 400 Bad Request error for some queries for the non admin user 2024220 - Cluster monitoring checkbox flickers when 
installing Operator in all-namespace mode 2024262 - Sample catalog is not displayed when one API call to the backend fails 2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability 2024316 - modal about support displays wrong annotation 2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected 2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page 2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view 2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined 2024515 - test-blocker: Ceph-storage-plugin tests failing 2024535 - hotplug disk missing OwnerReference 2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image 2024547 - Detail page is breaking for namespace store , backing store and bucket class. 2024551 - KMS resources not getting created for IBM FlashSystem storage 2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel 2024613 - pod-identity-webhook starts without tls 2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded 2024665 - Bindable services are not shown on topology 2024731 - linuxptp container: unnecessary checking of interfaces 2024750 - i18n some remaining OLM items 2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured 2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack 2024841 - test Keycloak with latest tag 2024859 - Not able to deploy an existing image from private image registry using developer console 2024880 - Egress IP breaks when network policies are applied 2024900 - Operator upgrade kube-apiserver 2024932 - console throws "Unauthorized" error after logging out 2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up 2025093 - Installer does not honour diskformat 
specified in storage policy and defaults to zeroedthick 2025230 - ClusterAutoscalerUnschedulablePods should not be a warning 2025266 - CreateResource route has exact prop which need to be removed 2025301 - [e2e][automation] VM actions availability in different VM states 2025304 - overwrite storage section of the DV spec instead of the pvc section 2025431 - [RFE]Provide specific windows source link 2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36 2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node 2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local 2025481 - Update VM Snapshots UI 2025488 - [DOCS] Update the doc for nmstate operator installation 2025592 - ODC 4.9 supports invalid devfiles only 2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings" 2025767 - VMs orphaned during machineset scaleup 2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard 2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku. 
2025821 - Make "Network Attachment Definitions" available to regular user 2025823 - The console nav bar ignores plugin separator in existing sections 2025830 - CentOS capitalizaion is wrong 2025837 - Warn users that the RHEL URL expire 2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin- 2025903 - [UI] RoleBindings tab doesn't show correct rolebindings 2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2026178 - OpenShift Alerting Rules Style-Guide Compliance 2026209 - Updation of task is getting failed (tekton hub integration) 2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io" 2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates 2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct 2026352 - Kube-Scheduler revision-pruner fail during install of new cluster 2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment 2026383 - Error when rendering custom Grafana dashboard through ConfigMap 2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation 2026396 - Cachito Issues: sriov-network-operator Image build failure 2026488 - openshift-controller-manager - delete event is repeating pathologically 2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
2026560 - Cluster-version operator does not remove unrecognized volume mounts 2026699 - fixed a bug with missing metadata 2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator 2026898 - Description/details are missing for Local Storage Operator 2027132 - Use the specific icon for Fedora and CentOS template 2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend 2027272 - KubeMemoryOvercommit alert should be human readable 2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group 2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue) 2027299 - The status of checkbox component is not revealed correctly in code 2027311 - K8s watch hooks do not work when fetching core resources 2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation 2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images 2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation 2027498 - [IBMCloud] SG Name character length limitation 2027501 - [4.10] Bootimage bump tracker 2027524 - Delete Application doesn't delete Channels or Brokers 2027563 - e2e/add-flow-ci.feature fix accessibility violations 2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges 2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions 2027685 - openshift-cluster-csi-drivers pods crashing on PSI 2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced 2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string 2027917 - No settings in hostfirmwaresettings and schema objects for masters 2027927 - sandbox creation fails due to obsolete 
option in /etc/containers/storage.conf 2027982 - nncp stucked at ConfigurationProgressing 2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters 2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed 2028030 - Panic detected in cluster-image-registry-operator pod 2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found" 2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9 2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin 2028141 - Console tests doesn't pass on Node.js 15 and 16 2028160 - Remove i18nKey in network-policy-peer-selectors.tsx 2028162 - Add Sprint 210 translations 2028170 - Remove leading and trailing whitespace 2028174 - Add Sprint 210 part 2 translations 2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it 2028217 - Cluster-version operator does not default Deployment replicas to one 2028240 - Multiple CatalogSources causing higher CPU use than necessary 2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings 2028325 - disableDrain should be set automatically on SNO 2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel 2028531 - Missing netFilter to the list of parameters when platform is OpenStack 2028610 - Installer doesn't retry on GCP rate limiting 2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting 2028695 - destroy cluster does not prune bootstrap instance profile 2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs 2028802 - CRI-O panic due to invalid memory address or nil pointer dereference 2028816 - VLAN IDs not released on failures 2028881 - Override not working for the PerformanceProfile template 2028885 - Console 
should show an error context if it logs an error object 2028949 - Masthead dropdown item hover text color is incorrect 2028963 - Whereabouts should reconcile stranded IP addresses 2029034 - enabling ExternalCloudProvider leads to inoperative cluster 2029178 - Create VM with wizard - page is not displayed 2029181 - Missing CR from PGT 2029273 - wizard is not able to use if project field is "All Projects" 2029369 - Cypress tests github rate limit errors 2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out 2029394 - missing empty text for hardware devices at wizard review 2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used 2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl 2029521 - EFS CSI driver cannot delete volumes under load 2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle 2029579 - Clicking on an Application which has a Helm Release in it causes an error 2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE 2029645 - Sync upstream 1.15.0 downstream 2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing 2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip 2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage 2029785 - CVO panic when an edge is included in both edges and conditionaledges 2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp) 2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error 2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace 2030228 - Fix StorageSpec resources field to use correct API 2030229 - Mirroring status card reflect wrong data 2030240 - Hide overview page for non-privileged user 2030305 - Export App job do not completes 2030347 - 
kube-state-metrics exposes metrics about resource annotations 2030364 - Shared resource CSI driver monitoring is not setup correctly 2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets 2030534 - Node selector/tolerations rules are evaluated too early 2030539 - Prometheus is not highly available 2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing 2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation 2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates. 2030677 - BOND CNI: There is no option to configure MTU on a Bond interface 2030692 - NPE in PipelineJobListener.upsertWorkflowJob 2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache 2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error 2030847 - PerformanceProfile API version should be v2 2030961 - Customizing the OAuth server URL does not apply to upgraded cluster 2031006 - Application name input field is not autofocused when user selects "Create application" 2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex 2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started 2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue 2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip 2031060 - Failing CSR Unit test due to expired test certificate 2031085 - ovs-vswitchd running more threads than expected 2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1 2031228 - CVE-2021-43813 grafana: directory traversal vulnerability 2031502 - [RFE] New common templates crash the ui 2031685 - Duplicated forward upstreams should be removed from the dns operator 2031699 - The displayed 
ipv6 address of a dns upstream should be case sensitive 2031797 - [RFE] Order and text of Boot source type input are wrong 2031826 - CI tests needed to confirm driver-toolkit image contents 2031831 - OCP Console - Global CSS overrides affecting dynamic plugins 2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional 2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled) 2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain) 2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself 2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource 2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64 2032141 - open the alertrule link in new tab, got empty page 2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy 2032296 - Cannot create machine with ephemeral disk on Azure 2032407 - UI will show the default openshift template wizard for HANA template 2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded 2032421 - [RFE] UI integration with automatic updated images 2032516 - Not able to import git repo with .devfile.yaml 2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource 2032547 - hardware devices table have filter when table is empty 2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool 2032566 - Cluster-ingress-router does not support Azure Stack 2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso 2032589 - DeploymentConfigs ignore 
resolve-names annotation 2032732 - Fix styling conflicts due to recent console-wide CSS changes 2032831 - Knative Services and Revisions are not shown when Service has no ownerReference 2032851 - Networking is "not available" in Virtualization Overview 2032926 - Machine API components should use K8s 1.23 dependencies 2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24 2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster 2033013 - Project dropdown in user preferences page is broken 2033044 - Unable to change import strategy if devfile is invalid 2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable 2033111 - IBM VPC operator library bump removed global CLI args 2033138 - "No model registered for Templates" shows on customize wizard 2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected 2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected 2033257 - unable to use configmap for helm charts 2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered 2033290 - Product builds for console are failing 2033382 - MAPO is missing machine annotations 2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations 2033403 - Devfile catalog does not show provider information 2033404 - Cloud event schema is missing source type and resource field is using wrong value 2033407 - Secure route data is not pre-filled in edit flow form 2033422 - CNO not allowing LGW conversion from SGW in runtime 2033434 - Offer darwin/arm64 oc in clidownloads 2033489 - CCM operator failing on baremetal platform 2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver 2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains 2033536 - [IPI on 
Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady 2033538 - Gather Cost Management Metrics Custom Resource 2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined 2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page 2033634 - list-style-type: disc is applied to the modal dropdowns 2033720 - Update samples in 4.10 2033728 - Bump OVS to 2.16.0-33 2033729 - remove runtime request timeout restriction for azure 2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended 2033749 - Azure Stack Terraform fails without Local Provider 2033750 - Local volume should pull multi-arch image for kube-rbac-proxy 2033751 - Bump kubernetes to 1.23 2033752 - make verify fails due to missing yaml-patch 2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource 2034004 - [e2e][automation] add tests for VM snapshot improvements 2034068 - [e2e][automation] Enhance tests for 4.10 downstream 2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore 2034097 - [OVN] After edit EgressIP object, the status is not correct 2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning 2034129 - blank page returned when clicking 'Get started' button 2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0 2034153 - CNO does not verify MTU migration for OpenShiftSDN 2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled 2034170 - Use function.knative.dev for Knative Functions related labels 2034190 - unable to add new VirtIO disks to VMs 2034192 - Prometheus fails to insert reporting metrics when the sample limit is met 2034243 - regular user cant load template list 2034245 - 
installing a cluster on aws, gcp always fails with "Error: Incompatible provider version" 2034248 - GPU/Host device modal is too small 2034257 - regular user Create VM missing permissions alert 2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial] 2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere 2034300 - Du validator policy is NonCompliant after DU configuration completed 2034319 - Negation constraint is not validating packages 2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology 2034350 - The CNO should implement the Whereabouts IP reconciliation cron job 2034362 - update description of disk interface 2034398 - The Whereabouts IPPools CRD should include the podref field 2034409 - Default CatalogSources should be pointing to 4.10 index images 2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics 2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode 2034460 - Summary: cloud-network-config-controller does not account for different environment 2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true 2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly 2034493 - Change cluster version operator log level 2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list 2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6 2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer 2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART 2034537 - Update team 2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds 2034563 - [Azure] create machine with wrong 
ephemeralStorageLocation value success 2034577 - Current OVN gateway mode should be reflected on node annotation as well 2034621 - context menu not popping up for application group 2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10 2034624 - Warn about unsupported CSI driver in vsphere operator 2034647 - missing volumes list in snapshot modal 2034648 - Rebase openshift-controller-manager to 1.23 2034650 - Rebase openshift/builder to 1.23 2034705 - vSphere: storage e2e tests logging configuration data 2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail. 2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment 2034785 - ptpconfig with summary_interval cannot be applied 2034823 - RHEL9 should be starred in template list 2034838 - An external router can inject routes if no service is added 2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent 2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty 2034881 - Cloud providers components should use K8s 1.23 dependencies 2034884 - ART cannot build the image because it tries to download controller-gen 2034889 - oc adm prune deployments does not work 2034898 - Regression in recently added Events feature 2034957 - update openshift-apiserver to kube 1.23.1 2035015 - ClusterLogForwarding CR remains stuck remediating forever 2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster 2035141 - [RFE] Show GPU/Host devices in template's details tab 2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC 2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting 2035199 - IPv6 support in mtu-migration-dispatcher.yaml 2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing 2035250 - Peering with ebgp peer over multi-hops doesn't 
work 2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices 2035315 - invalid test cases for AWS passthrough mode 2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env 2035321 - Add Sprint 211 translations 2035326 - [ExternalCloudProvider] installation with additional network on workers fails 2035328 - Ccoctl does not ignore credentials request manifest marked for deletion 2035333 - Kuryr orphans ports on 504 errors from Neutron 2035348 - Fix two grammar issues in kubevirt-plugin.json strings 2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets 2035409 - OLM E2E test depends on operator package that's no longer published 2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address 2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1' 2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster 2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page 2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers 2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class 2035602 - [e2e][automation] add tests for Virtualization Overview page cards 2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly 2035704 - RoleBindings list page filter doesn't apply 2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing. 
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed 2035772 - AccessMode and VolumeMode is not reserved for customize wizard 2035847 - Two dashes in the Cronjob / Job pod name 2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml 2035882 - [BIOS setting values] Create events for all invalid settings in spec 2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests” 2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen 2035927 - Cannot enable HighNodeUtilization scheduler profile 2035933 - volume mode and access mode are empty in customize wizard review tab 2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods 2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation 2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error 2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential 2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend 2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes 2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23 2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23 2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments 2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists 2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected 2036826 - oc adm prune deployments can prune the RC/RS 2036827 - The ccoctl still accepts CredentialsRequests without 
ServiceAccounts on GCP platform 2036861 - kube-apiserver is degraded while enable multitenant 2036937 - Command line tools page shows wrong download ODO link 2036940 - oc registry login fails if the file is empty or stdout 2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container 2036989 - Route URL copy to clipboard button wraps to a separate line by itself 2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters 2036993 - Machine API components should use Go lang version 1.17 2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. 2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api 2037073 - Alertmanager container fails to start because of startup probe never being successful 2037075 - Builds do not support CSI volumes 2037167 - Some log level in ibm-vpc-block-csi-controller are hard code 2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles 2037182 - PingSource badge color is not matched with knativeEventing color 2037203 - "Running VMs" card is too small in Virtualization Overview 2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly 2037237 - Add "This is a CD-ROM boot source" to customize wizard 2037241 - default TTL for noobaa cache buckets should be 0 2037246 - Cannot customize auto-update boot source 2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately 2037288 - Remove stale image reference 2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources 2037483 - Rbacs for Pods within the CBO should be more restrictive 2037484 - Bump dependencies to k8s 1.23 2037554 - Mismatched wave number error message should include the wave numbers that are in conflict 2037622 - [4.10-Alibaba CSI driver][Restore size for 
volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform] 2037635 - impossible to configure custom certs for default console route in ingress config 2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8 2037638 - Builds do not support CSI volumes as volume sources 2037664 - text formatting issue in Installed Operators list table 2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080 2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080 2037801 - Serverless installation is failing on CI jobs for e2e tests 2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format 2037856 - use lease for leader election 2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10 2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests 2037904 - upgrade operator deployment failed due to memory limit too low for manager container 2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation] 2038034 - non-privileged user cannot see auto-update boot source 2038053 - Bump dependencies to k8s 1.23 2038088 - Remove ipa-downloader references 2038160 - The default project missed the annotation : openshift.io/node-selector: "" 2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional 2038196 - must-gather is missing collecting some metal3 resources 2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777) 2038253 - Validator Policies are long lived 2038272 - Failures to build a PreprovisioningImage are not reported 2038384 - Azure Default Instance Types are Incorrect 2038389 - Failing test: [sig-arch] events should not repeat 
pathologically 2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket 2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips 2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained 2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect 2038663 - update kubevirt-plugin OWNERS 2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new" 2038705 - Update ptp reviewers 2038761 - Open Observe->Targets page, wait for a while, page become blank 2038768 - All the filters on the Observe->Targets page can't work 2038772 - Some monitors failed to display on Observe->Targets page 2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node 2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces 2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard 2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation 2038864 - E2E tests fail because multi-hop-net was not created 2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console 2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured 2038968 - Move feature gates from a carry patch to openshift/api 2039056 - Layout issue with breadcrumbs on API explorer page 2039057 - Kind column is not wide enough in API explorer page 2039064 - Bulk Import e2e test flaking at a high rate 2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled 2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters 2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously 
assigned got lost 2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy 2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator 2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade 2039227 - Improve image customization server parameter passing during installation 2039241 - Improve image customization server parameter passing during installation 2039244 - Helm Release revision history page crashes the UI 2039294 - SDN controller metrics cannot be consumed correctly by prometheus 2039311 - oc Does Not Describe Build CSI Volumes 2039315 - Helm release list page should only fetch secrets for deployed charts 2039321 - SDN controller metrics are not being consumed by prometheus 2039330 - Create NMState button doesn't work in OperatorHub web console 2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations 2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists 2039382 - gather_metallb_logs does not have execution permission 2039406 - logout from rest session after vsphere operator sync is finished 2039408 - Add GCP region northamerica-northeast2 to allowed regions 2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration 2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment 2039491 - oc - git:// protocol used in unit tests 2039516 - Bump OVN to ovn21.12-21.12.0-25 2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate 2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled 2039541 - Resolv-prepender script duplicating entries 2039586 - [e2e] update centos8 to centos stream8 2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty 2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3' 2039670 - Create PDBs for control plane components 2039678 - Page goes blank when create image pull secret 2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported 2039743 - React missing key warning when open operator hub detail page (and maybe others as well) 2039756 - React missing key warning when open KnativeServing details 2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab 2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard 2039781 - [GSS] OBC is not visible by admin of a Project on Console 2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector 2039868 - Insights Advisor widget is not in the disabled state when the Insights 
Operator is disabled 2039880 - Log level too low for control plane metrics 2039919 - Add E2E test for router compression feature 2039981 - ZTP for standard clusters installs stalld on master nodes 2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead 2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced 2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message 2040150 - Update ConfigMap keys for IBM HPCS 2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth 2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository 2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp 2040376 - "unknown instance type" error for supported m6i.xlarge instance 2040394 - Controller: enqueue the failed configmap till services update 2040467 - Cannot build ztp-site-generator container image 2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4 2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps 2040535 - Auto-update boot source is not available in customize wizard 2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name 2040603 - rhel worker scaleup playbook failed because missing some dependency of podman 2040616 - rolebindings page doesn't load for normal users 2040620 - [MAPO] Error pulling MAPO image on installation 2040653 - Topology sidebar warns that another component is updated while rendering 2040655 - User settings update fails when selecting application in topology sidebar 2040661 - Different react warnings about updating state on unmounted components when leaving topology 2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation 2040671 - 
[Feature:IPv6DualStack] most tests are failing in dualstack ipi 2040694 - Three upstream HTTPClientConfig struct fields missing in the operator 2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers 2040710 - cluster-baremetal-operator cannot update BMC subscription CR 2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms 2040782 - Import YAML page blocks input with more then one generateName attribute 2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute 2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator 2040793 - Fix snapshot e2e failures 2040880 - do not block upgrades if we can't connect to vcenter 2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10 2041093 - autounattend.xml missing 2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates 2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped" 2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23 2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller 2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener 2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud 2041466 - Kubedescheduler version is missing from the operator logs 2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses 2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods) 2041492 - Spacing between resources in inventory card is too small 2041509 - GCP Cloud provider components should use K8s 
1.23 dependencies 2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook 2041541 - audit: ManagedFields are dropped using API not annotation 2041546 - ovnkube: set election timer at RAFT cluster creation time 2041554 - use lease for leader election 2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected" 2041583 - etcd and api server cpu mask interferes with a guaranteed workload 2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure 2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation 2041620 - bundle CSV alm-examples does not parse 2041641 - Fix inotify leak and kubelet retaining memory 2041671 - Delete templates leads to 404 page 2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category 2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled 2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint 2041763 - The Observe > Alerting pages no longer have their default sort order applied 2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken 2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied 2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster 2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases 2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist 2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen 2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile 2041999 - [PROXY] external dns pod cannot recognize custom proxy CA 2042001 - unexpectedly 
found multiple load balancers 2042029 - kubedescheduler fails to install completely 2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters 2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs 2042059 - update discovery burst to reflect lots of CRDs on openshift clusters 2042069 - Revert toolbox to rhcos-toolbox 2042169 - Can not delete egressnetworkpolicy in Foreground propagation 2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool 2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud 2042274 - Storage API should be used when creating a PVC 2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection 2042366 - Lifecycle hooks should be independently managed 2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway 2042382 - [e2e][automation] CI takes more then 2 hours to run 2042395 - Add prerequisites for active health checks test 2042438 - Missing rpms in openstack-installer image 2042466 - Selection does not happen when switching from Topology Graph to List View 2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver 2042567 - insufficient info on CodeReady Containers configuration 2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk 2042619 - Overview page of the console is broken for hypershift clusters 2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running 2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud 2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud 2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly 2042829 - Topology performance: HPA was 
fetched for each Deployment (Pod Ring) 2042851 - Create template from SAP HANA template flow - VM is created instead of a new template 2042906 - Edit machineset with same machine deletion hook name succeed 2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal" 2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways' 2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial] 2043043 - Cluster Autoscaler should use K8s 1.23 dependencies 2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props) 2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects". 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 2044201 - Templates golden image parameters names should be supported 2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8] 2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied" 2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects 2044347 - Bump to kubernetes 1.23.3 2044481 - collect sharedresource cluster scoped instances with must-gather 2044496 - Unable to create hardware events subscription - failed to add finalizers 2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources 2044680 - Additional libovsdb performance and resource consumption fixes 2044704 - Observe > Alerting pages should not show runbook links in 4.10 2044717 - [e2e] improve tests for upstream test environment 2044724 - Remove namespace column on VM list page when a project 
is selected 2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff 2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD 2045024 - CustomNoUpgrade alerts should be ignored 2045112 - vsphere-problem-detector has missing rbac rules for leases 2045199 - SnapShot with Disk Hot-plug hangs 2045561 - Cluster Autoscaler should use the same default Group value as Cluster API 2045591 - Reconciliation of aws pod identity mutating webhook did not happen 2045849 - Add Sprint 212 translations 2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10 2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin 2045916 - [IBMCloud] Default machine profile in installer is unreliable 2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment 2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify 2046137 - oc output for unknown commands is not human readable 2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance 2046297 - Bump DB reconnect timeout 2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations 2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors 2046626 - Allow setting custom metrics for Ansible-based Operators 2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud 2047025 - Installation fails because of Alibaba CSI driver operator is degraded 2047190 - Bump Alibaba CSI driver for 4.10 2047238 - When using communities and localpreferences together, only localpreference gets applied 2047255 - alibaba: resourceGroupID not found 2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions 
2047317 - Update HELM OWNERS files under Dev Console 2047455 - [IBM Cloud] Update custom image os type 2047496 - Add image digest feature 2047779 - do not degrade cluster if storagepolicy creation fails 2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2047929 - use lease for leader election 2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services 2048048 - Application tab in User Preferences dropdown menus are too wide. 2048050 - Topology list view items are not highlighted on keyboard navigation 2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value 2048413 - Bond CNI: Failed to attach Bond NAD to pod 2048443 - Image registry operator panics when finalizes config deletion 2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-* 2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt 2048598 - Web terminal view is broken 2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure 2048891 - Topology page is crashed 2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class 2049043 - Cannot create VM from template 2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2049886 - Placeholder bug for OCP 4.10.0 metadata release 2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning 2050189 - [aws-efs-csi-driver] Merge upstream changes since 
v1.3.2 2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0 2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension' 2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s] 2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members 2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes 2050370 - alert data for burn budget needs to be updated to prevent regression 2050393 - ZTP missing support for local image registry and custom machine config 2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud 2050737 - Remove metrics and events for master port offsets 2050801 - Vsphere upi tries to access vsphere during manifests generation phase 2050883 - Logger object in LSO does not log source location accurately 2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit 2052062 - Whereabouts should implement client-go 1.22+ 2052125 - [4.10] Crio appears to be coredumping in some scenarios 2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config 2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1 2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch 2052756 - [4.10] PVs are not being cleaned up after PVC deletion 2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index 2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format" 2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs 2053268 - inability to detect static lifecycle failure 2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates 2053323 - OpenShift-Ansible BYOH Unit Tests are Broken 2053339 - Remove dev preview badge from IBM FlashSystem deployment windows 2053751 - ztp-site-generate container is missing convenience entrypoint 2053945 - [4.10] Failed to apply sriov policy on intel nics 2054109 - Missing "app" label 2054154 - RoleBinding in project without subject is causing "Project access" page to fail 2054244 - Latest pipeline run should be listed on the top of the pipeline run list 2054288 - console-master-e2e-gcp-console is broken 2054562 - DPU network operator 4.10 branch need to sync with master 2054897 - Unable to deploy hw-event-proxy operator 2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently 2055358 - Summary Interval Hardcoded in PTP Operator 
if Set in the Global Body Instead of Command Line 2055371 - Remove Check which enforces summary_interval must match logSyncInterval 2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11 2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API 2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured 2056479 - ovirt-csi-driver-node pods are crashing intermittently 2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope" 2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes" 2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs 2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation 2056948 - post 1.23 rebase: regression in service-load balancer reliability 2057438 - Service Level Agreement (SLA) always show 'Unknown' 2057721 - Fix Proxy support in RHACM 2.4.2 2057724 - Image creation fails when NMstateConfig CR is empty 2058641 - [4.10] Pod density test causing problems when using kube-burner 2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install 2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials 2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc

  1. References:

https://access.redhat.com/security/cve/CVE-2014-3577
https://access.redhat.com/security/cve/CVE-2016-10228
https://access.redhat.com/security/cve/CVE-2017-14502
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2018-1000858
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9169
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-25013
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-8927
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-9952
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-13434
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-15358
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-25660
https://access.redhat.com/security/cve/CVE-2020-25677
https://access.redhat.com/security/cve/CVE-2020-27618
https://access.redhat.com/security/cve/CVE-2020-27781
https://access.redhat.com/security/cve/CVE-2020-29361
https://access.redhat.com/security/cve/CVE-2020-29362
https://access.redhat.com/security/cve/CVE-2020-29363
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3326
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3733
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-21684
https://access.redhat.com/security/cve/CVE-2021-22946
https://access.redhat.com/security/cve/CVE-2021-22947
https://access.redhat.com/security/cve/CVE-2021-25215
https://access.redhat.com/security/cve/CVE-2021-27218
https://access.redhat.com/security/cve/CVE-2021-30666
https://access.redhat.com/security/cve/CVE-2021-30761
https://access.redhat.com/security/cve/CVE-2021-30762
https://access.redhat.com/security/cve/CVE-2021-33928
https://access.redhat.com/security/cve/CVE-2021-33929
https://access.redhat.com/security/cve/CVE-2021-33930
https://access.redhat.com/security/cve/CVE-2021-33938
https://access.redhat.com/security/cve/CVE-2021-36222
https://access.redhat.com/security/cve/CVE-2021-37750
https://access.redhat.com/security/cve/CVE-2021-39226
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-43813
https://access.redhat.com/security/cve/CVE-2021-44716
https://access.redhat.com/security/cve/CVE-2021-44717
https://access.redhat.com/security/cve/CVE-2022-0532
https://access.redhat.com/security/cve/CVE-2022-21673
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL
0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne
eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM
CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF
aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC
Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp
sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO
RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN
rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry
bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z
7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT
b5PUYUBIZLc=
=GUDA
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Relevant releases/architectures:

Red Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, s390x, x86_64

  1. Description:

GNOME is the default desktop environment of Red Hat Enterprise Linux.

The following packages have been upgraded to a later upstream version: gnome-remote-desktop (0.1.8), pipewire (0.3.6), vte291 (0.52.4), webkit2gtk3 (2.28.4), xdg-desktop-portal (1.6.0), xdg-desktop-portal-gtk (1.6.0).
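Whether an installed build predates one of the fixed builds listed above comes down to RPM's segment-by-segment version comparison (the rpmvercmp algorithm): version strings are split into alternating numeric and alphabetic runs, numeric runs compare as integers, and a numeric run outranks an alphabetic one. A minimal Python sketch of that comparison — simplified, and omitting the tilde/caret handling present in the real librpm implementation:

```python
import re

def rpm_vercmp(a: str, b: str) -> int:
    """Simplified rpmvercmp: return -1, 0, or 1 as a <, ==, or > b.

    Splits each version into numeric and alphabetic segments, as rpm does;
    does not handle the '~' (sorts lower) or '^' (sorts higher) modifiers.
    """
    def segments(s: str):
        # Alternating runs of digits and letters; separators are ignored.
        return re.findall(r"\d+|[a-zA-Z]+", s)

    sa, sb = segments(a), segments(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            # Numeric segments compare as integers, so "10" > "9".
            if int(x) != int(y):
                return 1 if int(x) > int(y) else -1
        elif x.isdigit() != y.isdigit():
            # A numeric segment sorts higher than an alphabetic one.
            return 1 if x.isdigit() else -1
        elif x != y:
            # Alphabetic segments compare lexically.
            return 1 if x > y else -1
    # All shared segments equal: the longer version is newer.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))
```

For example, `rpm_vercmp("2.28.4", "2.28.3")` returns 1, so a webkit2gtk3 build at 2.28.3 predates the fixed 2.28.4 build.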

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.3 Release Notes linked from the References section.

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

GDM must be restarted for this update to take effect.

Bugs fixed (https://bugzilla.redhat.com/):

1207179 - Select items matching non existing pattern does not unselect already selected
1566027 - can't correctly compute contents size if hidden files are included
1569868 - Browsing samba shares using gvfs is very slow
1652178 - [RFE] perf-tool run on wayland
1656262 - The terminal's character display is unclear on rhel8 guest after installing gnome
1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled
1692536 - login screen shows after gnome-initial-setup
1706008 - Sound Effect sometimes fails to change to selected option.
1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead.
1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined
1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly
1758891 - tracker-devel subpackage missing from el8 repos
1775345 - Rebase xdg-desktop-portal to 1.6
1778579 - Nautilus does not respect umask settings.
1779691 - Rebase xdg-desktop-portal-gtk to 1.6
1794045 - There are two different high contrast versions of desktop icons
1804719 - Update vte291 to 0.52.4
1805929 - RHEL 8.1 gnome-shell-extension errors
1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp
1814820 - No checkbox to install updates in the shutdown dialog
1816070 - "search for an application to open this file" dialog broken
1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution
1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1817143 - Rebase WebKitGTK to 2.28
1820759 - Include IO stall fixes
1820760 - Include IO fixes
1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening
1827030 - gnome-settings-daemon: subscription notification on CentOS Stream
1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content
1832347 - [Rebase] Rebase pipewire to 0.3.x
1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install
1837381 - Backport screen cast improvements to 8.3
1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version
1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6
1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113
1840080 - Can not control top bar menus via keys in Wayland
1840788 - [flatpak][rhel8] unable to build potrace as dependency
1843486 - Software crash after clicking Updates tab
1844578 - anaconda very rarely crashes at startup with a pygobject traceback
1846191 - usb adapters hotplug crashes gnome-shell
1847051 - JS ERROR: TypeError: area is null
1847061 - File search doesn't work under certain locales
1847062 - gnome-remote-desktop crash on QXL graphics
1847203 - gnome-shell: get_top_visible_window_actor(): gnome-shell killed by SIGSEGV
1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow
1854734 - PipeWire 0.2 should be required by xdg-desktop-portal
1866332 - Remove obsolete libusb-devel dependency
1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at "Started GNOME Display Manager" - GDM regression issue.

Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: LibRaw-0.19.5-2.el8.src.rpm PackageKit-1.1.12-6.el8.src.rpm dleyna-renderer-0.6.0-3.el8.src.rpm frei0r-plugins-1.6.1-7.el8.src.rpm gdm-3.28.3-34.el8.src.rpm gnome-control-center-3.28.2-22.el8.src.rpm gnome-photos-3.28.1-3.el8.src.rpm gnome-remote-desktop-0.1.8-3.el8.src.rpm gnome-session-3.28.1-10.el8.src.rpm gnome-settings-daemon-3.32.0-11.el8.src.rpm gnome-shell-3.32.2-20.el8.src.rpm gnome-shell-extensions-3.32.1-11.el8.src.rpm gnome-terminal-3.28.3-2.el8.src.rpm gtk3-3.22.30-6.el8.src.rpm gvfs-1.36.2-10.el8.src.rpm mutter-3.32.2-48.el8.src.rpm nautilus-3.28.1-14.el8.src.rpm pipewire-0.3.6-1.el8.src.rpm pipewire0.2-0.2.7-6.el8.src.rpm potrace-1.15-3.el8.src.rpm tracker-2.1.5-2.el8.src.rpm vte291-0.52.4-2.el8.src.rpm webkit2gtk3-2.28.4-1.el8.src.rpm webrtc-audio-processing-0.3-9.el8.src.rpm xdg-desktop-portal-1.6.0-2.el8.src.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.src.rpm

aarch64: PackageKit-1.1.12-6.el8.aarch64.rpm PackageKit-command-not-found-1.1.12-6.el8.aarch64.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-cron-1.1.12-6.el8.aarch64.rpm PackageKit-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-debugsource-1.1.12-6.el8.aarch64.rpm PackageKit-glib-1.1.12-6.el8.aarch64.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.aarch64.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-gtk3-module-1.1.12-6.el8.aarch64.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.aarch64.rpm frei0r-plugins-1.6.1-7.el8.aarch64.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.aarch64.rpm frei0r-plugins-debugsource-1.6.1-7.el8.aarch64.rpm frei0r-plugins-opencv-1.6.1-7.el8.aarch64.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.aarch64.rpm gdm-3.28.3-34.el8.aarch64.rpm gdm-debuginfo-3.28.3-34.el8.aarch64.rpm gdm-debugsource-3.28.3-34.el8.aarch64.rpm gnome-control-center-3.28.2-22.el8.aarch64.rpm gnome-control-center-debuginfo-3.28.2-22.el8.aarch64.rpm gnome-control-center-debugsource-3.28.2-22.el8.aarch64.rpm gnome-remote-desktop-0.1.8-3.el8.aarch64.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.aarch64.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.aarch64.rpm gnome-session-3.28.1-10.el8.aarch64.rpm gnome-session-debuginfo-3.28.1-10.el8.aarch64.rpm gnome-session-debugsource-3.28.1-10.el8.aarch64.rpm gnome-session-wayland-session-3.28.1-10.el8.aarch64.rpm gnome-session-xsession-3.28.1-10.el8.aarch64.rpm gnome-settings-daemon-3.32.0-11.el8.aarch64.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.aarch64.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.aarch64.rpm gnome-shell-3.32.2-20.el8.aarch64.rpm gnome-shell-debuginfo-3.32.2-20.el8.aarch64.rpm gnome-shell-debugsource-3.32.2-20.el8.aarch64.rpm gnome-terminal-3.28.3-2.el8.aarch64.rpm gnome-terminal-debuginfo-3.28.3-2.el8.aarch64.rpm gnome-terminal-debugsource-3.28.3-2.el8.aarch64.rpm 
gnome-terminal-nautilus-3.28.3-2.el8.aarch64.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.aarch64.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.aarch64.rpm gtk-update-icon-cache-3.22.30-6.el8.aarch64.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-3.22.30-6.el8.aarch64.rpm gtk3-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-debugsource-3.22.30-6.el8.aarch64.rpm gtk3-devel-3.22.30-6.el8.aarch64.rpm gtk3-devel-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-immodule-xim-3.22.30-6.el8.aarch64.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-tests-debuginfo-3.22.30-6.el8.aarch64.rpm gvfs-1.36.2-10.el8.aarch64.rpm gvfs-afc-1.36.2-10.el8.aarch64.rpm gvfs-afc-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-afp-1.36.2-10.el8.aarch64.rpm gvfs-afp-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-archive-1.36.2-10.el8.aarch64.rpm gvfs-archive-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-client-1.36.2-10.el8.aarch64.rpm gvfs-client-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-debugsource-1.36.2-10.el8.aarch64.rpm gvfs-devel-1.36.2-10.el8.aarch64.rpm gvfs-fuse-1.36.2-10.el8.aarch64.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-goa-1.36.2-10.el8.aarch64.rpm gvfs-goa-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-gphoto2-1.36.2-10.el8.aarch64.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-mtp-1.36.2-10.el8.aarch64.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-smb-1.36.2-10.el8.aarch64.rpm gvfs-smb-debuginfo-1.36.2-10.el8.aarch64.rpm libsoup-debuginfo-2.62.3-2.el8.aarch64.rpm libsoup-debugsource-2.62.3-2.el8.aarch64.rpm libsoup-devel-2.62.3-2.el8.aarch64.rpm mutter-3.32.2-48.el8.aarch64.rpm mutter-debuginfo-3.32.2-48.el8.aarch64.rpm mutter-debugsource-3.32.2-48.el8.aarch64.rpm mutter-tests-debuginfo-3.32.2-48.el8.aarch64.rpm nautilus-3.28.1-14.el8.aarch64.rpm nautilus-debuginfo-3.28.1-14.el8.aarch64.rpm nautilus-debugsource-3.28.1-14.el8.aarch64.rpm 
nautilus-extensions-3.28.1-14.el8.aarch64.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.aarch64.rpm pipewire-0.3.6-1.el8.aarch64.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-debugsource-0.3.6-1.el8.aarch64.rpm pipewire-devel-0.3.6-1.el8.aarch64.rpm pipewire-doc-0.3.6-1.el8.aarch64.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-libs-0.3.6-1.el8.aarch64.rpm pipewire-libs-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-utils-0.3.6-1.el8.aarch64.rpm pipewire-utils-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire0.2-debugsource-0.2.7-6.el8.aarch64.rpm pipewire0.2-devel-0.2.7-6.el8.aarch64.rpm pipewire0.2-libs-0.2.7-6.el8.aarch64.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.aarch64.rpm potrace-1.15-3.el8.aarch64.rpm potrace-debuginfo-1.15-3.el8.aarch64.rpm potrace-debugsource-1.15-3.el8.aarch64.rpm pygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm pygobject3-debugsource-3.28.3-2.el8.aarch64.rpm python3-gobject-3.28.3-2.el8.aarch64.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm python3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm tracker-2.1.5-2.el8.aarch64.rpm tracker-debuginfo-2.1.5-2.el8.aarch64.rpm tracker-debugsource-2.1.5-2.el8.aarch64.rpm vte-profile-0.52.4-2.el8.aarch64.rpm vte291-0.52.4-2.el8.aarch64.rpm vte291-debuginfo-0.52.4-2.el8.aarch64.rpm vte291-debugsource-0.52.4-2.el8.aarch64.rpm vte291-devel-debuginfo-0.52.4-2.el8.aarch64.rpm webkit2gtk3-2.28.4-1.el8.aarch64.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.aarch64.rpm webkit2gtk3-debugsource-2.28.4-1.el8.aarch64.rpm webkit2gtk3-devel-2.28.4-1.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.aarch64.rpm webrtc-audio-processing-0.3-9.el8.aarch64.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.aarch64.rpm 
webrtc-audio-processing-debugsource-0.3-9.el8.aarch64.rpm xdg-desktop-portal-1.6.0-2.el8.aarch64.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.aarch64.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.aarch64.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.aarch64.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.aarch64.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.aarch64.rpm

noarch: gnome-classic-session-3.32.1-11.el8.noarch.rpm gnome-control-center-filesystem-3.28.2-22.el8.noarch.rpm gnome-shell-extension-apps-menu-3.32.1-11.el8.noarch.rpm gnome-shell-extension-auto-move-windows-3.32.1-11.el8.noarch.rpm gnome-shell-extension-common-3.32.1-11.el8.noarch.rpm gnome-shell-extension-dash-to-dock-3.32.1-11.el8.noarch.rpm gnome-shell-extension-desktop-icons-3.32.1-11.el8.noarch.rpm gnome-shell-extension-disable-screenshield-3.32.1-11.el8.noarch.rpm gnome-shell-extension-drive-menu-3.32.1-11.el8.noarch.rpm gnome-shell-extension-horizontal-workspaces-3.32.1-11.el8.noarch.rpm gnome-shell-extension-launch-new-instance-3.32.1-11.el8.noarch.rpm gnome-shell-extension-native-window-placement-3.32.1-11.el8.noarch.rpm gnome-shell-extension-no-hot-corner-3.32.1-11.el8.noarch.rpm gnome-shell-extension-panel-favorites-3.32.1-11.el8.noarch.rpm gnome-shell-extension-places-menu-3.32.1-11.el8.noarch.rpm gnome-shell-extension-screenshot-window-sizer-3.32.1-11.el8.noarch.rpm gnome-shell-extension-systemMonitor-3.32.1-11.el8.noarch.rpm gnome-shell-extension-top-icons-3.32.1-11.el8.noarch.rpm gnome-shell-extension-updates-dialog-3.32.1-11.el8.noarch.rpm gnome-shell-extension-user-theme-3.32.1-11.el8.noarch.rpm gnome-shell-extension-window-grouper-3.32.1-11.el8.noarch.rpm gnome-shell-extension-window-list-3.32.1-11.el8.noarch.rpm gnome-shell-extension-windowsNavigator-3.32.1-11.el8.noarch.rpm gnome-shell-extension-workspace-indicator-3.32.1-11.el8.noarch.rpm

ppc64le: LibRaw-0.19.5-2.el8.ppc64le.rpm LibRaw-debuginfo-0.19.5-2.el8.ppc64le.rpm LibRaw-debugsource-0.19.5-2.el8.ppc64le.rpm LibRaw-samples-debuginfo-0.19.5-2.el8.ppc64le.rpm PackageKit-1.1.12-6.el8.ppc64le.rpm PackageKit-command-not-found-1.1.12-6.el8.ppc64le.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-cron-1.1.12-6.el8.ppc64le.rpm PackageKit-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-debugsource-1.1.12-6.el8.ppc64le.rpm PackageKit-glib-1.1.12-6.el8.ppc64le.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.ppc64le.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-gtk3-module-1.1.12-6.el8.ppc64le.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.ppc64le.rpm dleyna-renderer-0.6.0-3.el8.ppc64le.rpm dleyna-renderer-debuginfo-0.6.0-3.el8.ppc64le.rpm dleyna-renderer-debugsource-0.6.0-3.el8.ppc64le.rpm frei0r-plugins-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-debugsource-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-opencv-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.ppc64le.rpm gdm-3.28.3-34.el8.ppc64le.rpm gdm-debuginfo-3.28.3-34.el8.ppc64le.rpm gdm-debugsource-3.28.3-34.el8.ppc64le.rpm gnome-control-center-3.28.2-22.el8.ppc64le.rpm gnome-control-center-debuginfo-3.28.2-22.el8.ppc64le.rpm gnome-control-center-debugsource-3.28.2-22.el8.ppc64le.rpm gnome-photos-3.28.1-3.el8.ppc64le.rpm gnome-photos-debuginfo-3.28.1-3.el8.ppc64le.rpm gnome-photos-debugsource-3.28.1-3.el8.ppc64le.rpm gnome-photos-tests-3.28.1-3.el8.ppc64le.rpm gnome-remote-desktop-0.1.8-3.el8.ppc64le.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.ppc64le.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.ppc64le.rpm gnome-session-3.28.1-10.el8.ppc64le.rpm gnome-session-debuginfo-3.28.1-10.el8.ppc64le.rpm gnome-session-debugsource-3.28.1-10.el8.ppc64le.rpm gnome-session-wayland-session-3.28.1-10.el8.ppc64le.rpm 
gnome-session-xsession-3.28.1-10.el8.ppc64le.rpm gnome-settings-daemon-3.32.0-11.el8.ppc64le.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.ppc64le.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.ppc64le.rpm gnome-shell-3.32.2-20.el8.ppc64le.rpm gnome-shell-debuginfo-3.32.2-20.el8.ppc64le.rpm gnome-shell-debugsource-3.32.2-20.el8.ppc64le.rpm gnome-terminal-3.28.3-2.el8.ppc64le.rpm gnome-terminal-debuginfo-3.28.3-2.el8.ppc64le.rpm gnome-terminal-debugsource-3.28.3-2.el8.ppc64le.rpm gnome-terminal-nautilus-3.28.3-2.el8.ppc64le.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.ppc64le.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.ppc64le.rpm gtk-update-icon-cache-3.22.30-6.el8.ppc64le.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-3.22.30-6.el8.ppc64le.rpm gtk3-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-debugsource-3.22.30-6.el8.ppc64le.rpm gtk3-devel-3.22.30-6.el8.ppc64le.rpm gtk3-devel-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-immodule-xim-3.22.30-6.el8.ppc64le.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-tests-debuginfo-3.22.30-6.el8.ppc64le.rpm gvfs-1.36.2-10.el8.ppc64le.rpm gvfs-afc-1.36.2-10.el8.ppc64le.rpm gvfs-afc-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-afp-1.36.2-10.el8.ppc64le.rpm gvfs-afp-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-archive-1.36.2-10.el8.ppc64le.rpm gvfs-archive-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-client-1.36.2-10.el8.ppc64le.rpm gvfs-client-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-debugsource-1.36.2-10.el8.ppc64le.rpm gvfs-devel-1.36.2-10.el8.ppc64le.rpm gvfs-fuse-1.36.2-10.el8.ppc64le.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-goa-1.36.2-10.el8.ppc64le.rpm gvfs-goa-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-gphoto2-1.36.2-10.el8.ppc64le.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-mtp-1.36.2-10.el8.ppc64le.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.ppc64le.rpm 
gvfs-smb-1.36.2-10.el8.ppc64le.rpm gvfs-smb-debuginfo-1.36.2-10.el8.ppc64le.rpm libsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm libsoup-debugsource-2.62.3-2.el8.ppc64le.rpm libsoup-devel-2.62.3-2.el8.ppc64le.rpm mutter-3.32.2-48.el8.ppc64le.rpm mutter-debuginfo-3.32.2-48.el8.ppc64le.rpm mutter-debugsource-3.32.2-48.el8.ppc64le.rpm mutter-tests-debuginfo-3.32.2-48.el8.ppc64le.rpm nautilus-3.28.1-14.el8.ppc64le.rpm nautilus-debuginfo-3.28.1-14.el8.ppc64le.rpm nautilus-debugsource-3.28.1-14.el8.ppc64le.rpm nautilus-extensions-3.28.1-14.el8.ppc64le.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.ppc64le.rpm pipewire-0.3.6-1.el8.ppc64le.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-debugsource-0.3.6-1.el8.ppc64le.rpm pipewire-devel-0.3.6-1.el8.ppc64le.rpm pipewire-doc-0.3.6-1.el8.ppc64le.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-libs-0.3.6-1.el8.ppc64le.rpm pipewire-libs-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-utils-0.3.6-1.el8.ppc64le.rpm pipewire-utils-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire0.2-debugsource-0.2.7-6.el8.ppc64le.rpm pipewire0.2-devel-0.2.7-6.el8.ppc64le.rpm pipewire0.2-libs-0.2.7-6.el8.ppc64le.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.ppc64le.rpm potrace-1.15-3.el8.ppc64le.rpm potrace-debuginfo-1.15-3.el8.ppc64le.rpm potrace-debugsource-1.15-3.el8.ppc64le.rpm pygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm pygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm python3-gobject-3.28.3-2.el8.ppc64le.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm python3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm tracker-2.1.5-2.el8.ppc64le.rpm tracker-debuginfo-2.1.5-2.el8.ppc64le.rpm tracker-debugsource-2.1.5-2.el8.ppc64le.rpm vte-profile-0.52.4-2.el8.ppc64le.rpm vte291-0.52.4-2.el8.ppc64le.rpm vte291-debuginfo-0.52.4-2.el8.ppc64le.rpm vte291-debugsource-0.52.4-2.el8.ppc64le.rpm vte291-devel-debuginfo-0.52.4-2.el8.ppc64le.rpm webkit2gtk3-2.28.4-1.el8.ppc64le.rpm 
webkit2gtk3-debuginfo-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-debugsource-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-devel-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm webrtc-audio-processing-0.3-9.el8.ppc64le.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.ppc64le.rpm webrtc-audio-processing-debugsource-0.3-9.el8.ppc64le.rpm xdg-desktop-portal-1.6.0-2.el8.ppc64le.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.ppc64le.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.ppc64le.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.ppc64le.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.ppc64le.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.ppc64le.rpm

s390x: PackageKit-1.1.12-6.el8.s390x.rpm PackageKit-command-not-found-1.1.12-6.el8.s390x.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-cron-1.1.12-6.el8.s390x.rpm PackageKit-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-debugsource-1.1.12-6.el8.s390x.rpm PackageKit-glib-1.1.12-6.el8.s390x.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.s390x.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-gtk3-module-1.1.12-6.el8.s390x.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.s390x.rpm frei0r-plugins-1.6.1-7.el8.s390x.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.s390x.rpm frei0r-plugins-debugsource-1.6.1-7.el8.s390x.rpm frei0r-plugins-opencv-1.6.1-7.el8.s390x.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.s390x.rpm gdm-3.28.3-34.el8.s390x.rpm gdm-debuginfo-3.28.3-34.el8.s390x.rpm gdm-debugsource-3.28.3-34.el8.s390x.rpm gnome-control-center-3.28.2-22.el8.s390x.rpm gnome-control-center-debuginfo-3.28.2-22.el8.s390x.rpm gnome-control-center-debugsource-3.28.2-22.el8.s390x.rpm gnome-remote-desktop-0.1.8-3.el8.s390x.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.s390x.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.s390x.rpm gnome-session-3.28.1-10.el8.s390x.rpm gnome-session-debuginfo-3.28.1-10.el8.s390x.rpm gnome-session-debugsource-3.28.1-10.el8.s390x.rpm gnome-session-wayland-session-3.28.1-10.el8.s390x.rpm gnome-session-xsession-3.28.1-10.el8.s390x.rpm gnome-settings-daemon-3.32.0-11.el8.s390x.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.s390x.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.s390x.rpm gnome-shell-3.32.2-20.el8.s390x.rpm gnome-shell-debuginfo-3.32.2-20.el8.s390x.rpm gnome-shell-debugsource-3.32.2-20.el8.s390x.rpm gnome-terminal-3.28.3-2.el8.s390x.rpm gnome-terminal-debuginfo-3.28.3-2.el8.s390x.rpm gnome-terminal-debugsource-3.28.3-2.el8.s390x.rpm gnome-terminal-nautilus-3.28.3-2.el8.s390x.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.s390x.rpm 
gsettings-desktop-schemas-devel-3.32.0-5.el8.s390x.rpm gtk-update-icon-cache-3.22.30-6.el8.s390x.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-3.22.30-6.el8.s390x.rpm gtk3-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-debugsource-3.22.30-6.el8.s390x.rpm gtk3-devel-3.22.30-6.el8.s390x.rpm gtk3-devel-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-immodule-xim-3.22.30-6.el8.s390x.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-tests-debuginfo-3.22.30-6.el8.s390x.rpm gvfs-1.36.2-10.el8.s390x.rpm gvfs-afp-1.36.2-10.el8.s390x.rpm gvfs-afp-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-archive-1.36.2-10.el8.s390x.rpm gvfs-archive-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-client-1.36.2-10.el8.s390x.rpm gvfs-client-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-debugsource-1.36.2-10.el8.s390x.rpm gvfs-devel-1.36.2-10.el8.s390x.rpm gvfs-fuse-1.36.2-10.el8.s390x.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-goa-1.36.2-10.el8.s390x.rpm gvfs-goa-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-gphoto2-1.36.2-10.el8.s390x.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-mtp-1.36.2-10.el8.s390x.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-smb-1.36.2-10.el8.s390x.rpm gvfs-smb-debuginfo-1.36.2-10.el8.s390x.rpm libsoup-debuginfo-2.62.3-2.el8.s390x.rpm libsoup-debugsource-2.62.3-2.el8.s390x.rpm libsoup-devel-2.62.3-2.el8.s390x.rpm mutter-3.32.2-48.el8.s390x.rpm mutter-debuginfo-3.32.2-48.el8.s390x.rpm mutter-debugsource-3.32.2-48.el8.s390x.rpm mutter-tests-debuginfo-3.32.2-48.el8.s390x.rpm nautilus-3.28.1-14.el8.s390x.rpm nautilus-debuginfo-3.28.1-14.el8.s390x.rpm nautilus-debugsource-3.28.1-14.el8.s390x.rpm nautilus-extensions-3.28.1-14.el8.s390x.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.s390x.rpm pipewire-0.3.6-1.el8.s390x.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-debugsource-0.3.6-1.el8.s390x.rpm 
pipewire-devel-0.3.6-1.el8.s390x.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-libs-0.3.6-1.el8.s390x.rpm pipewire-libs-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-utils-0.3.6-1.el8.s390x.rpm pipewire-utils-debuginfo-0.3.6-1.el8.s390x.rpm pipewire0.2-debugsource-0.2.7-6.el8.s390x.rpm pipewire0.2-devel-0.2.7-6.el8.s390x.rpm pipewire0.2-libs-0.2.7-6.el8.s390x.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.s390x.rpm potrace-1.15-3.el8.s390x.rpm potrace-debuginfo-1.15-3.el8.s390x.rpm potrace-debugsource-1.15-3.el8.s390x.rpm pygobject3-debuginfo-3.28.3-2.el8.s390x.rpm pygobject3-debugsource-3.28.3-2.el8.s390x.rpm python3-gobject-3.28.3-2.el8.s390x.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.s390x.rpm python3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm tracker-2.1.5-2.el8.s390x.rpm tracker-debuginfo-2.1.5-2.el8.s390x.rpm tracker-debugsource-2.1.5-2.el8.s390x.rpm vte-profile-0.52.4-2.el8.s390x.rpm vte291-0.52.4-2.el8.s390x.rpm vte291-debuginfo-0.52.4-2.el8.s390x.rpm vte291-debugsource-0.52.4-2.el8.s390x.rpm vte291-devel-debuginfo-0.52.4-2.el8.s390x.rpm webkit2gtk3-2.28.4-1.el8.s390x.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.s390x.rpm webkit2gtk3-debugsource-2.28.4-1.el8.s390x.rpm webkit2gtk3-devel-2.28.4-1.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.s390x.rpm webrtc-audio-processing-0.3-9.el8.s390x.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.s390x.rpm webrtc-audio-processing-debugsource-0.3-9.el8.s390x.rpm xdg-desktop-portal-1.6.0-2.el8.s390x.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.s390x.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.s390x.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.s390x.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.s390x.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.s390x.rpm

x86_64: LibRaw-0.19.5-2.el8.i686.rpm LibRaw-0.19.5-2.el8.x86_64.rpm LibRaw-debuginfo-0.19.5-2.el8.i686.rpm LibRaw-debuginfo-0.19.5-2.el8.x86_64.rpm LibRaw-debugsource-0.19.5-2.el8.i686.rpm LibRaw-debugsource-0.19.5-2.el8.x86_64.rpm LibRaw-samples-debuginfo-0.19.5-2.el8.i686.rpm LibRaw-samples-debuginfo-0.19.5-2.el8.x86_64.rpm PackageKit-1.1.12-6.el8.x86_64.rpm PackageKit-command-not-found-1.1.12-6.el8.x86_64.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-cron-1.1.12-6.el8.x86_64.rpm PackageKit-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-debugsource-1.1.12-6.el8.i686.rpm PackageKit-debugsource-1.1.12-6.el8.x86_64.rpm PackageKit-glib-1.1.12-6.el8.i686.rpm PackageKit-glib-1.1.12-6.el8.x86_64.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.x86_64.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-gtk3-module-1.1.12-6.el8.i686.rpm PackageKit-gtk3-module-1.1.12-6.el8.x86_64.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.x86_64.rpm dleyna-renderer-0.6.0-3.el8.x86_64.rpm dleyna-renderer-debuginfo-0.6.0-3.el8.x86_64.rpm dleyna-renderer-debugsource-0.6.0-3.el8.x86_64.rpm frei0r-plugins-1.6.1-7.el8.x86_64.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.x86_64.rpm frei0r-plugins-debugsource-1.6.1-7.el8.x86_64.rpm frei0r-plugins-opencv-1.6.1-7.el8.x86_64.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.x86_64.rpm gdm-3.28.3-34.el8.i686.rpm gdm-3.28.3-34.el8.x86_64.rpm gdm-debuginfo-3.28.3-34.el8.i686.rpm gdm-debuginfo-3.28.3-34.el8.x86_64.rpm gdm-debugsource-3.28.3-34.el8.i686.rpm gdm-debugsource-3.28.3-34.el8.x86_64.rpm gnome-control-center-3.28.2-22.el8.x86_64.rpm 
gnome-control-center-debuginfo-3.28.2-22.el8.x86_64.rpm gnome-control-center-debugsource-3.28.2-22.el8.x86_64.rpm gnome-photos-3.28.1-3.el8.x86_64.rpm gnome-photos-debuginfo-3.28.1-3.el8.x86_64.rpm gnome-photos-debugsource-3.28.1-3.el8.x86_64.rpm gnome-photos-tests-3.28.1-3.el8.x86_64.rpm gnome-remote-desktop-0.1.8-3.el8.x86_64.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.x86_64.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.x86_64.rpm gnome-session-3.28.1-10.el8.x86_64.rpm gnome-session-debuginfo-3.28.1-10.el8.x86_64.rpm gnome-session-debugsource-3.28.1-10.el8.x86_64.rpm gnome-session-wayland-session-3.28.1-10.el8.x86_64.rpm gnome-session-xsession-3.28.1-10.el8.x86_64.rpm gnome-settings-daemon-3.32.0-11.el8.x86_64.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.x86_64.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.x86_64.rpm gnome-shell-3.32.2-20.el8.x86_64.rpm gnome-shell-debuginfo-3.32.2-20.el8.x86_64.rpm gnome-shell-debugsource-3.32.2-20.el8.x86_64.rpm gnome-terminal-3.28.3-2.el8.x86_64.rpm gnome-terminal-debuginfo-3.28.3-2.el8.x86_64.rpm gnome-terminal-debugsource-3.28.3-2.el8.x86_64.rpm gnome-terminal-nautilus-3.28.3-2.el8.x86_64.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.x86_64.rpm gsettings-desktop-schemas-3.32.0-5.el8.i686.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.i686.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.x86_64.rpm gtk-update-icon-cache-3.22.30-6.el8.x86_64.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.i686.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-3.22.30-6.el8.i686.rpm gtk3-3.22.30-6.el8.x86_64.rpm gtk3-debuginfo-3.22.30-6.el8.i686.rpm gtk3-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-debugsource-3.22.30-6.el8.i686.rpm gtk3-debugsource-3.22.30-6.el8.x86_64.rpm gtk3-devel-3.22.30-6.el8.i686.rpm gtk3-devel-3.22.30-6.el8.x86_64.rpm gtk3-devel-debuginfo-3.22.30-6.el8.i686.rpm gtk3-devel-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-immodule-xim-3.22.30-6.el8.x86_64.rpm 
gtk3-immodule-xim-debuginfo-3.22.30-6.el8.i686.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.i686.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-tests-debuginfo-3.22.30-6.el8.i686.rpm gtk3-tests-debuginfo-3.22.30-6.el8.x86_64.rpm gvfs-1.36.2-10.el8.x86_64.rpm gvfs-afc-1.36.2-10.el8.x86_64.rpm gvfs-afc-debuginfo-1.36.2-10.el8.i686.rpm gvfs-afc-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-afp-1.36.2-10.el8.x86_64.rpm gvfs-afp-debuginfo-1.36.2-10.el8.i686.rpm gvfs-afp-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-archive-1.36.2-10.el8.x86_64.rpm gvfs-archive-debuginfo-1.36.2-10.el8.i686.rpm gvfs-archive-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-client-1.36.2-10.el8.i686.rpm gvfs-client-1.36.2-10.el8.x86_64.rpm gvfs-client-debuginfo-1.36.2-10.el8.i686.rpm gvfs-client-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-debuginfo-1.36.2-10.el8.i686.rpm gvfs-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-debugsource-1.36.2-10.el8.i686.rpm gvfs-debugsource-1.36.2-10.el8.x86_64.rpm gvfs-devel-1.36.2-10.el8.i686.rpm gvfs-devel-1.36.2-10.el8.x86_64.rpm gvfs-fuse-1.36.2-10.el8.x86_64.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.i686.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-goa-1.36.2-10.el8.x86_64.rpm gvfs-goa-debuginfo-1.36.2-10.el8.i686.rpm gvfs-goa-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-gphoto2-1.36.2-10.el8.x86_64.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.i686.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-mtp-1.36.2-10.el8.x86_64.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.i686.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-smb-1.36.2-10.el8.x86_64.rpm gvfs-smb-debuginfo-1.36.2-10.el8.i686.rpm gvfs-smb-debuginfo-1.36.2-10.el8.x86_64.rpm libsoup-debuginfo-2.62.3-2.el8.i686.rpm libsoup-debuginfo-2.62.3-2.el8.x86_64.rpm libsoup-debugsource-2.62.3-2.el8.i686.rpm libsoup-debugsource-2.62.3-2.el8.x86_64.rpm libsoup-devel-2.62.3-2.el8.i686.rpm libsoup-devel-2.62.3-2.el8.x86_64.rpm mutter-3.32.2-48.el8.i686.rpm 
mutter-3.32.2-48.el8.x86_64.rpm mutter-debuginfo-3.32.2-48.el8.i686.rpm mutter-debuginfo-3.32.2-48.el8.x86_64.rpm mutter-debugsource-3.32.2-48.el8.i686.rpm mutter-debugsource-3.32.2-48.el8.x86_64.rpm mutter-tests-debuginfo-3.32.2-48.el8.i686.rpm mutter-tests-debuginfo-3.32.2-48.el8.x86_64.rpm nautilus-3.28.1-14.el8.x86_64.rpm nautilus-debuginfo-3.28.1-14.el8.i686.rpm nautilus-debuginfo-3.28.1-14.el8.x86_64.rpm nautilus-debugsource-3.28.1-14.el8.i686.rpm nautilus-debugsource-3.28.1-14.el8.x86_64.rpm nautilus-extensions-3.28.1-14.el8.i686.rpm nautilus-extensions-3.28.1-14.el8.x86_64.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.i686.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.x86_64.rpm pipewire-0.3.6-1.el8.i686.rpm pipewire-0.3.6-1.el8.x86_64.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.i686.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-debuginfo-0.3.6-1.el8.i686.rpm pipewire-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-debugsource-0.3.6-1.el8.i686.rpm pipewire-debugsource-0.3.6-1.el8.x86_64.rpm pipewire-devel-0.3.6-1.el8.i686.rpm pipewire-devel-0.3.6-1.el8.x86_64.rpm pipewire-doc-0.3.6-1.el8.x86_64.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.i686.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-libs-0.3.6-1.el8.i686.rpm pipewire-libs-0.3.6-1.el8.x86_64.rpm pipewire-libs-debuginfo-0.3.6-1.el8.i686.rpm pipewire-libs-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-utils-0.3.6-1.el8.x86_64.rpm pipewire-utils-debuginfo-0.3.6-1.el8.i686.rpm pipewire-utils-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire0.2-debugsource-0.2.7-6.el8.i686.rpm pipewire0.2-debugsource-0.2.7-6.el8.x86_64.rpm pipewire0.2-devel-0.2.7-6.el8.i686.rpm pipewire0.2-devel-0.2.7-6.el8.x86_64.rpm pipewire0.2-libs-0.2.7-6.el8.i686.rpm pipewire0.2-libs-0.2.7-6.el8.x86_64.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.i686.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.x86_64.rpm potrace-1.15-3.el8.i686.rpm potrace-1.15-3.el8.x86_64.rpm potrace-debuginfo-1.15-3.el8.i686.rpm 
potrace-debuginfo-1.15-3.el8.x86_64.rpm potrace-debugsource-1.15-3.el8.i686.rpm potrace-debugsource-1.15-3.el8.x86_64.rpm pygobject3-debuginfo-3.28.3-2.el8.i686.rpm pygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm pygobject3-debugsource-3.28.3-2.el8.i686.rpm pygobject3-debugsource-3.28.3-2.el8.x86_64.rpm python3-gobject-3.28.3-2.el8.i686.rpm python3-gobject-3.28.3-2.el8.x86_64.rpm python3-gobject-base-3.28.3-2.el8.i686.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.i686.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm python3-gobject-debuginfo-3.28.3-2.el8.i686.rpm python3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm tracker-2.1.5-2.el8.i686.rpm tracker-2.1.5-2.el8.x86_64.rpm tracker-debuginfo-2.1.5-2.el8.i686.rpm tracker-debuginfo-2.1.5-2.el8.x86_64.rpm tracker-debugsource-2.1.5-2.el8.i686.rpm tracker-debugsource-2.1.5-2.el8.x86_64.rpm vte-profile-0.52.4-2.el8.x86_64.rpm vte291-0.52.4-2.el8.i686.rpm vte291-0.52.4-2.el8.x86_64.rpm vte291-debuginfo-0.52.4-2.el8.i686.rpm vte291-debuginfo-0.52.4-2.el8.x86_64.rpm vte291-debugsource-0.52.4-2.el8.i686.rpm vte291-debugsource-0.52.4-2.el8.x86_64.rpm vte291-devel-debuginfo-0.52.4-2.el8.i686.rpm vte291-devel-debuginfo-0.52.4-2.el8.x86_64.rpm webkit2gtk3-2.28.4-1.el8.i686.rpm webkit2gtk3-2.28.4-1.el8.x86_64.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.i686.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.x86_64.rpm webkit2gtk3-debugsource-2.28.4-1.el8.i686.rpm webkit2gtk3-debugsource-2.28.4-1.el8.x86_64.rpm webkit2gtk3-devel-2.28.4-1.el8.i686.rpm webkit2gtk3-devel-2.28.4-1.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-2.28.4-1.el8.i686.rpm webkit2gtk3-jsc-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.i686.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.i686.rpm 
webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.x86_64.rpm webrtc-audio-processing-0.3-9.el8.i686.rpm webrtc-audio-processing-0.3-9.el8.x86_64.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.i686.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.x86_64.rpm webrtc-audio-processing-debugsource-0.3-9.el8.i686.rpm webrtc-audio-processing-debugsource-0.3-9.el8.x86_64.rpm xdg-desktop-portal-1.6.0-2.el8.x86_64.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.x86_64.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.x86_64.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.x86_64.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.x86_64.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.x86_64.rpm

Red Hat Enterprise Linux BaseOS (v. 8):

Source: gsettings-desktop-schemas-3.32.0-5.el8.src.rpm libsoup-2.62.3-2.el8.src.rpm pygobject3-3.28.3-2.el8.src.rpm

aarch64: gsettings-desktop-schemas-3.32.0-5.el8.aarch64.rpm libsoup-2.62.3-2.el8.aarch64.rpm libsoup-debuginfo-2.62.3-2.el8.aarch64.rpm libsoup-debugsource-2.62.3-2.el8.aarch64.rpm pygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm pygobject3-debugsource-3.28.3-2.el8.aarch64.rpm python3-gobject-base-3.28.3-2.el8.aarch64.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm python3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm

ppc64le: gsettings-desktop-schemas-3.32.0-5.el8.ppc64le.rpm libsoup-2.62.3-2.el8.ppc64le.rpm libsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm libsoup-debugsource-2.62.3-2.el8.ppc64le.rpm pygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm pygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm python3-gobject-base-3.28.3-2.el8.ppc64le.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm python3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm

s390x: gsettings-desktop-schemas-3.32.0-5.el8.s390x.rpm libsoup-2.62.3-2.el8.s390x.rpm libsoup-debuginfo-2.62.3-2.el8.s390x.rpm libsoup-debugsource-2.62.3-2.el8.s390x.rpm pygobject3-debuginfo-3.28.3-2.el8.s390x.rpm pygobject3-debugsource-3.28.3-2.el8.s390x.rpm python3-gobject-base-3.28.3-2.el8.s390x.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.s390x.rpm python3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm

x86_64: gsettings-desktop-schemas-3.32.0-5.el8.x86_64.rpm libsoup-2.62.3-2.el8.i686.rpm libsoup-2.62.3-2.el8.x86_64.rpm libsoup-debuginfo-2.62.3-2.el8.i686.rpm libsoup-debuginfo-2.62.3-2.el8.x86_64.rpm libsoup-debugsource-2.62.3-2.el8.i686.rpm libsoup-debugsource-2.62.3-2.el8.x86_64.rpm pygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm pygobject3-debugsource-3.28.3-2.el8.x86_64.rpm python3-gobject-base-3.28.3-2.el8.x86_64.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm python3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm

Red Hat CodeReady Linux Builder (v. 

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Gentoo Linux Security Advisory                           GLSA 202003-22


                                       https://security.gentoo.org/

 Severity: Normal
    Title: WebkitGTK+: Multiple vulnerabilities
     Date: March 15, 2020
     Bugs: #699156, #706374, #709612
       ID: 202003-22


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which may lead to arbitrary code execution.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk          < 2.26.4                  >= 2.26.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"
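The affected/unaffected ranges above amount to a simple version comparison: an installation is vulnerable when its webkit-gtk version is strictly below 2.26.4. A minimal sketch in Python; the function name is illustrative, and the plain dotted-integer comparison is an assumption that ignores Gentoo-specific suffixes such as `-r1` or `_p1`:

```python
import re

def is_vulnerable(installed: str, fixed: str = "2.26.4") -> bool:
    """Return True if `installed` is strictly below `fixed`.

    Simple dotted-integer comparison; assumes plain numeric version
    strings (Gentoo revision suffixes like -r1 are not handled).
    """
    def parts(version: str) -> list[int]:
        # Split "2.26.4" into [2, 26, 4] for lexicographic comparison.
        return [int(x) for x in re.findall(r"\d+", version)]
    return parts(installed) < parts(fixed)

print(is_vulnerable("2.26.3"))  # → True  (below the fixed version)
print(is_vulnerable("2.26.4"))  # → False (already patched)
```

Real package managers use their own version-comparison rules (Portage's include suffix and revision ordering), so this sketch is only a first approximation of the range check in the table above.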

References

[ 1 ] CVE-2019-8625      https://nvd.nist.gov/vuln/detail/CVE-2019-8625
[ 2 ] CVE-2019-8674      https://nvd.nist.gov/vuln/detail/CVE-2019-8674
[ 3 ] CVE-2019-8707      https://nvd.nist.gov/vuln/detail/CVE-2019-8707
[ 4 ] CVE-2019-8710      https://nvd.nist.gov/vuln/detail/CVE-2019-8710
[ 5 ] CVE-2019-8719      https://nvd.nist.gov/vuln/detail/CVE-2019-8719
[ 6 ] CVE-2019-8720      https://nvd.nist.gov/vuln/detail/CVE-2019-8720
[ 7 ] CVE-2019-8726      https://nvd.nist.gov/vuln/detail/CVE-2019-8726
[ 8 ] CVE-2019-8733      https://nvd.nist.gov/vuln/detail/CVE-2019-8733
[ 9 ] CVE-2019-8735      https://nvd.nist.gov/vuln/detail/CVE-2019-8735
[ 10 ] CVE-2019-8743     https://nvd.nist.gov/vuln/detail/CVE-2019-8743
[ 11 ] CVE-2019-8763     https://nvd.nist.gov/vuln/detail/CVE-2019-8763
[ 12 ] CVE-2019-8764     https://nvd.nist.gov/vuln/detail/CVE-2019-8764
[ 13 ] CVE-2019-8765     https://nvd.nist.gov/vuln/detail/CVE-2019-8765
[ 14 ] CVE-2019-8766     https://nvd.nist.gov/vuln/detail/CVE-2019-8766
[ 15 ] CVE-2019-8768     https://nvd.nist.gov/vuln/detail/CVE-2019-8768
[ 16 ] CVE-2019-8769     https://nvd.nist.gov/vuln/detail/CVE-2019-8769
[ 17 ] CVE-2019-8771     https://nvd.nist.gov/vuln/detail/CVE-2019-8771
[ 18 ] CVE-2019-8782     https://nvd.nist.gov/vuln/detail/CVE-2019-8782
[ 19 ] CVE-2019-8783     https://nvd.nist.gov/vuln/detail/CVE-2019-8783
[ 20 ] CVE-2019-8808     https://nvd.nist.gov/vuln/detail/CVE-2019-8808
[ 21 ] CVE-2019-8811     https://nvd.nist.gov/vuln/detail/CVE-2019-8811
[ 22 ] CVE-2019-8812     https://nvd.nist.gov/vuln/detail/CVE-2019-8812
[ 23 ] CVE-2019-8813     https://nvd.nist.gov/vuln/detail/CVE-2019-8813
[ 24 ] CVE-2019-8814     https://nvd.nist.gov/vuln/detail/CVE-2019-8814
[ 25 ] CVE-2019-8815     https://nvd.nist.gov/vuln/detail/CVE-2019-8815
[ 26 ] CVE-2019-8816     https://nvd.nist.gov/vuln/detail/CVE-2019-8816
[ 27 ] CVE-2019-8819     https://nvd.nist.gov/vuln/detail/CVE-2019-8819
[ 28 ] CVE-2019-8820     https://nvd.nist.gov/vuln/detail/CVE-2019-8820
[ 29 ] CVE-2019-8821     https://nvd.nist.gov/vuln/detail/CVE-2019-8821
[ 30 ] CVE-2019-8822     https://nvd.nist.gov/vuln/detail/CVE-2019-8822
[ 31 ] CVE-2019-8823     https://nvd.nist.gov/vuln/detail/CVE-2019-8823
[ 32 ] CVE-2019-8835     https://nvd.nist.gov/vuln/detail/CVE-2019-8835
[ 33 ] CVE-2019-8844     https://nvd.nist.gov/vuln/detail/CVE-2019-8844
[ 34 ] CVE-2019-8846     https://nvd.nist.gov/vuln/detail/CVE-2019-8846
[ 35 ] CVE-2020-3862     https://nvd.nist.gov/vuln/detail/CVE-2020-3862
[ 36 ] CVE-2020-3864     https://nvd.nist.gov/vuln/detail/CVE-2020-3864
[ 37 ] CVE-2020-3865     https://nvd.nist.gov/vuln/detail/CVE-2020-3865
[ 38 ] CVE-2020-3867     https://nvd.nist.gov/vuln/detail/CVE-2020-3867
[ 39 ] CVE-2020-3868     https://nvd.nist.gov/vuln/detail/CVE-2020-3868
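Reference lists like the one above can be extracted and deduplicated mechanically when cross-checking advisories; a minimal sketch (the helper name is illustrative and not part of any advisory tooling):

```python
import re

def extract_cves(text: str) -> list[str]:
    """Return sorted unique CVE identifiers found in advisory text.

    CVE IDs have the form CVE-YYYY-NNNN, where the sequence number
    is four or more digits.
    """
    return sorted(set(re.findall(r"CVE-\d{4}-\d{4,7}", text)))

sample = (
    "[ 1 ] CVE-2019-8625 https://nvd.nist.gov/vuln/detail/CVE-2019-8625 "
    "[ 35 ] CVE-2020-3862 https://nvd.nist.gov/vuln/detail/CVE-2020-3862"
)
print(extract_cves(sample))  # → ['CVE-2019-8625', 'CVE-2020-3862']
```

Note that each CVE appears twice per entry (once as a label, once in the URL), which is why the set-based deduplication matters.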

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202003-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202002-1478",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.5"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.4"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.17"
      },
      {
        "model": "leap",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "opensuse",
        "version": "15.1"
      },
      {
        "model": "icloud",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.8"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (ipod touch \u7b2c 7 \u4e16\u4ee3)"
      },
      {
        "model": "leap",
        "scope": null,
        "trust": 0.8,
        "vendor": "opensuse",
        "version": null
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (iphone 6s \u4ee5\u964d)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.5 \u672a\u6e80 (macos mojave)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.5 \u672a\u6e80 (macos high sierra)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (ipad air 2 \u4ee5\u964d)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 7.17 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (apple tv hd)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (apple tv 4k)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (ipad mini 4 \u4ee5\u964d)"
      },
      {
        "model": "itunes",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 12.10.4 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.5 \u672a\u6e80 (macos catalina)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 10.9.2 \u672a\u6e80 (windows 10 \u4ee5\u964d)"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/o:opensuse_project:leap",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2020-3865",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-3865",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Medium",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 6.8,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-002349",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-181990",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2020-3865",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-002349",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-3865",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-002349",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202001-1411",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-181990",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-3865",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari, etc. are all products of Apple (Apple). Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. WebKit Page Loading is one of the page loading components. Apple tvOS is a smart TV operating system. The product supports storage of music, photos, App and contacts, etc. A security vulnerability exists in the WebKit Page Loading component in several Apple products. The following products and versions are affected: Windows-based iCloud versions prior to 10.9.2 and 7.17; Windows-based iTunes versions prior to 12.10.4; Apple tvOS versions prior to 13.3.1; Safari versions prior to 13.0.5. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. 
This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. 
\n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. \n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

5. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain
net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from `oc explain`
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.
Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment
1838802 - mysql8 connector from operatorhub does not work with metering operator
1838845 - Metering operator can't connect to postgres DB from Operator Hub
1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1868294 - NFD operator does not allow customisation of nfd-worker.conf
1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration
1890672 - NFD is missing a build flag to build correctly
1890741 - path to the CA trust bundle ConfigMap is broken in report operator
1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster
1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel
1900125 - FIPS error while generating RSA private key for CA
1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub
1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub
1913837 - The CI and ART 4.7 metering images are not mirrored
1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le
1916010 - olm skip range is set to the wrong range
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923998 - NFD Operator is failing to update and remains in Replacing state

5.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 
                   CVE-2022-24407 
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with
updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing
Kubernetes application platform solution designed for on-premise or private
cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container
Platform 4.10.3. See the following advisory for the RPM packages for this
release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory.
See the following Release Notes documentation, which will be updated
shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index
validation (CVE-2021-3121)
* grafana: Snapshot authentication bypass (CVE-2021-39226)
* golang: net/http: limit growth of header canonicalization cache
(CVE-2021-44716)
* nodejs-axios: Regular expression denial of service in trim function
(CVE-2021-3749)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* grafana: Forward OAuth Identity Token can allow users to access some data
sources (CVE-2022-21673)
* grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata
as follows:

(For x86_64 architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is
sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

  $ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is
sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

  $ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is
sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these
updated packages and images when they are available in the appropriate
release channel. To check for available updates, use the OpenShift Console
or the CLI oc command.
Instructions for upgrading a cluster are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation,
which will be updated shortly for this release, for moderate instructions
on how to upgrade your cluster and fully apply this asynchronous errata
update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image
operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't
resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String
updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not
established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC
\"needVhostNet\" \u0026 \"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another 
master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details
page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up
navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from
OpenShift WebConsole\n2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error\n2001337 - Details Card in ODF Dashboard mentions OCS\n2001339 - fix text content hotplug\n2001413 - [e2e][automation] add/delete nic and disk to template\n2001441 - Test: oc adm must-gather runs successfully for audit logs -  fail due to startup log\n2001442 - Empty termination.log file for the kube-apiserver has too permissive mode\n2001479 - IBM Cloud DNS unable to create/update records\n2001566 - Enable alerts for prometheus operator in UWM\n2001575 - Clicking on the perspective switcher shows a white page with loader\n2001577 - Quick search placeholder is not displayed properly when the search string is removed\n2001578 - [e2e][automation] add tests for vm dashboard tab\n2001605 - PVs remain in Released state for a long time after the claim is deleted\n2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options\n2001620 - Cluster becomes degraded if it can\u0027t talk to Manila\n2001760 - While creating \u0027Backing Store\u0027, \u0027Bucket Class\u0027, \u0027Namespace Store\u0027 user is navigated to \u0027Installed Operators\u0027 page after clicking on ODF\n2001761 - Unable to apply cluster operator storage for SNO on GCP platform. 
\n2001765 - Some error message in the log  of diskmaker-manager caused confusion\n2001784 - show loading page before final results instead of showing a transient message No log files exist\n2001804 - Reload feature on Environment section in Build Config form does not work properly\n2001810 - cluster admin unable to view BuildConfigs in all namespaces\n2001817 - Failed to load RoleBindings list that will lead to \u2018Role name\u2019 is not able to be selected on Create RoleBinding page as well\n2001823 - OCM controller must update operator status\n2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start\n2001835 - Could not select image tag version when create app from dev console\n2001855 - Add capacity is disabled for ocs-storagecluster\n2001856 - Repeating event: MissingVersion no image found for operand pod\n2001959 - Side nav list borders don\u0027t extend to edges of container\n2002007 - Layout issue on \"Something went wrong\" page\n2002010 - ovn-kube may never attempt to retry a pod creation\n2002012 - Cannot change volume mode when cloning a VM from a template\n2002027 - Two instances of Dotnet helm chart show as one in topology\n2002075 - opm render does not automatically pulling in the image(s) used in the deployments\n2002121 - [OVN] upgrades failed for IPI  OSP16 OVN  IPSec cluster\n2002125 - Network policy details page heading should be updated to Network Policy details\n2002133 - [e2e][automation] add support/virtualization and improve deleteResource\n2002134 - [e2e][automation] add test to verify vm details tab\n2002215 - Multipath day1 not working on s390x\n2002238 - Image stream tag is not persisted when switching from yaml to form editor\n2002262 - [vSphere] Incorrect user agent in vCenter sessions list\n2002266 - SinkBinding create form doesn\u0027t allow to use subject name, instead of label selector\n2002276 - OLM fails to upgrade operators immediately\n2002300 - Altering the Schedule Profile configurations 
doesn\u0027t affect the placement of the pods\n2002354 - Missing DU configuration \"Done\" status reporting during ZTP flow\n2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn\u0027t use commonjs\n2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation\n2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN\n2002397 - Resources search is inconsistent\n2002434 - CRI-O leaks some children PIDs\n2002443 - Getting undefined error on create local volume set page\n2002461 - DNS operator performs spurious updates in response to API\u0027s defaulting of service\u0027s internalTrafficPolicy\n2002504 - When the openshift-cluster-storage-operator is degraded because of \"VSphereProblemDetectorController_SyncError\", the insights operator is not sending the logs from all pods. \n2002559 - User preference for topology list view does not follow when a new namespace is created\n2002567 - Upstream SR-IOV worker doc has broken links\n2002588 - Change text to be sentence case to align with PF\n2002657 - ovn-kube egress IP monitoring is using a random port over the node network\n2002713 - CNO: OVN logs should have millisecond resolution\n2002748 - [ICNI2] \u0027ErrorAddingLogicalPort\u0027 failed to handle external GW check: timeout waiting for namespace event\n2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite\n2002763 - Two storage systems getting created with external mode RHCS\n2002808 - KCM does not use web identity credentials\n2002834 - Cluster-version operator does not remove unrecognized volume mounts\n2002896 - Incorrect result return when user filter data by name on search page\n2002950 - Why spec.containers.command is not created with \"oc create deploymentconfig \u003cdc-name\u003e --image=\u003cimage\u003e -- \u003ccommand\u003e\"\n2003096 - [e2e][automation] check bootsource URL is displaying on review step\n2003113 - OpenShift Baremetal IPI 
installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role\n2003120 - CI: Uncaught error with ResizeObserver on operand details page\n2003145 - Duplicate operand tab titles causes \"two children with the same key\" warning\n2003164 - OLM, fatal error: concurrent map writes\n2003178 - [FLAKE][knative] The UI doesn\u0027t show updated traffic distribution after accepting the form\n2003193 - Kubelet/crio leaks netns and veth ports in the host\n2003195 - OVN CNI should ensure host veths are removed\n2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting \u0027-e JENKINS_PASSWORD=password\u0027  ENV  which was working for old container images\n2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace\n2003239 - \"[sig-builds][Feature:Builds][Slow] can use private repositories as build input\" tests fail outside of CI\n2003244 - Revert libovsdb client code\n2003251 - Patternfly components with list element has list item bullet when they should not. 
\n2003252 - \"[sig-builds][Feature:Builds][Slow] starting a build using CLI  start-build test context override environment BUILD_LOGLEVEL in buildconfig\" tests do not work as expected outside of CI\n2003269 - Rejected pods should be filtered from admission regression\n2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release\n2003426 - [e2e][automation]  add test for vm details bootorder\n2003496 - [e2e][automation] add test for vm resources requirment settings\n2003641 - All metal ipi jobs are failing in 4.10\n2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state\n2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node\n2003683 - Samples operator is panicking in CI\n2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster \"Connection Details\" page\n2003715 - Error on creating local volume set after selection of the volume mode\n2003743 - Remove workaround keeping /boot RW for kdump support\n2003775 - etcd pod on CrashLoopBackOff after master replacement procedure\n2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver\n2003792 - Monitoring metrics query graph flyover panel is useless\n2003808 - Add Sprint 207 translations\n2003845 - Project admin cannot access image vulnerabilities view\n2003859 - sdn emits events with garbage messages\n2003896 - (release-4.10) ApiRequestCounts conditional gatherer\n2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas\n2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes\n2004059 - [e2e][automation] fix current tests for downstream\n2004060 - Trying to use basic spring boot sample causes crash on Firefox\n2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn\u0027t close after selection\n2004127 - [flake] openshift-controller-manager event 
reason/SuccessfulDelete occurs too frequently\n2004203 - build config\u0027s created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver\n2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory\n2004449 - Boot option recovery menu prevents image boot\n2004451 - The backup filename displayed in the RecentBackup message is incorrect\n2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts\n2004508 - TuneD issues with the recent ConfigParser changes. \n2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions\n2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs\n2004578 - Monitoring and node labels missing for an external storage platform\n2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days\n2004596 - [4.10] Bootimage bump tracker\n2004597 - Duplicate ramdisk log containers running\n2004600 - Duplicate ramdisk log containers running\n2004609 - output of \"crictl inspectp\" is not complete\n2004625 - BMC credentials could be logged if they change\n2004632 - When LE takes a large amount of time, multiple whereabouts are seen\n2004721 - ptp/worker custom threshold doesn\u0027t change ptp events threshold\n2004736 - [knative] Create button on new Broker form is inactive despite form being filled\n2004796 - [e2e][automation] add test for vm scheduling policy\n2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque\n2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card\n2004901 - [e2e][automation] improve kubevirt devconsole tests\n2004962 - Console frontend job consuming too much CPU in CI\n2005014 - state of ODF StorageSystem is misreported during installation or uninstallation\n2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines\n2005179 - pods status 
filter is not taking effect\n2005182 - sync list of deprecated apis about to be removed\n2005282 - Storage cluster name is given as title in StorageSystem details page\n2005355 - setuptools 58 makes Kuryr CI fail\n2005407 - ClusterNotUpgradeable Alert should be set to Severity Info\n2005415 - PTP operator with sidecar api configured throws  bind: address already in use\n2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console\n2005554 - The switch status of the button \"Show default project\" is not revealed correctly in code\n2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable\n2005761 - QE - Implementing crw-basic feature file\n2005783 - Fix accessibility issues in the \"Internal\" and \"Internal - Attached Mode\" Installation Flow\n2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty\n2005854 - SSH NodePort service is created for each VM\n2005901 - KS, KCM and KA going Degraded during master nodes upgrade\n2005902 - Current UI flow for MCG only deployment is confusing and doesn\u0027t reciprocate any message to the end-user\n2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics\n2005971 - Change telemeter to report the Application Services product usage metrics\n2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files\n2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased\n2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types\n2006101 - Power off fails for drivers that don\u0027t support Soft power off\n2006243 - Metal IPI upgrade jobs are running out of disk space\n2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset  for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer:  ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift  clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on  self._get_vip_port(loadbalancer).id   
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10]  sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console - Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN] EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE] Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure] Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver] Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
\n2034766 - Special Resource Operator(SRO) -  no cert-manager pod created in dual stack environment\n2034785 - ptpconfig with summary_interval cannot be applied\n2034823 - RHEL9 should be starred in template list\n2034838 - An external router can inject routes if no service is added\n2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent\n2034879 - Lifecycle hook\u0027s name and owner shouldn\u0027t be allowed to be empty\n2034881 - Cloud providers components should use K8s 1.23 dependencies\n2034884 - ART cannot build the image because it tries to download controller-gen\n2034889 - `oc adm prune deployments` does not work\n2034898 - Regression in recently added Events feature\n2034957 - update openshift-apiserver to kube 1.23.1\n2035015 - ClusterLogForwarding CR remains stuck remediating forever\n2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster\n2035141 - [RFE] Show GPU/Host devices in template\u0027s details tab\n2035146 - \"kubevirt-plugin~PVC cannot be empty\" shows on add-disk modal while adding existing PVC\n2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting\n2035199 - IPv6 support in mtu-migration-dispatcher.yaml\n2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing\n2035250 - Peering with ebgp peer over multi-hops doesn\u0027t work\n2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices\n2035315 - invalid test cases for AWS passthrough mode\n2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env\n2035321 - Add Sprint 211 translations\n2035326 - [ExternalCloudProvider] installation with additional network on workers fails\n2035328 - Ccoctl does not ignore credentials request manifest marked for deletion\n2035333 - Kuryr orphans ports on 504 errors from Neutron\n2035348 - Fix two grammar issues in kubevirt-plugin.json strings\n2035393 - oc set data 
--dry-run=server  makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN  Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain  olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking -  networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR  is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc

5.
References:

https://access.redhat.com/security/cve/CVE-2014-3577
https://access.redhat.com/security/cve/CVE-2016-10228
https://access.redhat.com/security/cve/CVE-2017-14502
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2018-1000858
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9169
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-25013
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-8927
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-9952
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-13434
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-15358
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-25660
https://access.redhat.com/security/cve/CVE-2020-25677
https://access.redhat.com/security/cve/CVE-2020-27618
https://access.redhat.com/security/cve/CVE-2020-27781
https://access.redhat.com/security/cve/CVE-2020-29361
https://access.redhat.com/security/cve/CVE-2020-29362
https://access.redhat.com/security/cve/CVE-2020-29363
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3326
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3733
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-21684
https://access.redhat.com/security/cve/CVE-2021-22946
https://access.redhat.com/security/cve/CVE-2021-22947
https://access.redhat.com/security/cve/CVE-2021-25215
https://access.redhat.com/security/cve/CVE-2021-27218
https://access.redhat.com/security/cve/CVE-2021-30666
https://access.redhat.com/security/cve/CVE-2021-30761
https://access.redhat.com/security/cve/CVE-2021-30762
https://access.redhat.com/security/cve/CVE-2021-33928
https://access.redhat.com/security/cve/CVE-2021-33929
https://access.redhat.com/security/cve/CVE-2021-33930
https://access.redhat.com/security/cve/CVE-2021-33938
https://access.redhat.com/security/cve/CVE-2021-36222
https://access.redhat.com/security/cve/CVE-2021-37750
https://access.redhat.com/security/cve/CVE-2021-39226
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-43813
https://access.redhat.com/security/cve/CVE-2021-44716
https://access.redhat.com/security/cve/CVE-2021-44717
https://access.redhat.com/security/cve/CVE-2022-0532
https://access.redhat.com/security/cve/CVE-2022-21673
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL
0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne
eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM
CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF
aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC
Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp
sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO
RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN
rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry
bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z
7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT
b5PUYUBIZLc=
=GUDA
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
. Relevant releases/architectures:

Red Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, s390x, x86_64

3. Description:

GNOME is the default desktop environment of Red Hat Enterprise Linux.

The following packages have been upgraded to a later upstream version:
gnome-remote-desktop (0.1.8), pipewire (0.3.6), vte291 (0.52.4),
webkit2gtk3 (2.28.4), xdg-desktop-portal (1.6.0), xdg-desktop-portal-gtk
(1.6.0).

Additional Changes:

For detailed information on changes in this release, see the Red Hat
Enterprise Linux 8.3 Release Notes linked from the References section. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/11258

GDM must be restarted for this update to take effect. Bugs fixed (https://bugzilla.redhat.com/):

1207179 - Select items matching non existing pattern does not unselect already selected
1566027 - can't correctly compute contents size if hidden files are included
1569868 - Browsing samba shares using gvfs is very slow
1652178 - [RFE] perf-tool run on wayland
1656262 - The terminal's character display is unclear on rhel8 guest after installing gnome
1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled
1692536 - login screen shows after gnome-initial-setup
1706008 - Sound Effect sometimes fails to change to selected option.
1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead.
1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined
1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly
1758891 - tracker-devel subpackage missing from el8 repos
1775345 - Rebase xdg-desktop-portal to 1.6
1778579 - Nautilus does not respect umask settings.
1779691 - Rebase xdg-desktop-portal-gtk to 1.6
1794045 - There are two different high contrast versions of desktop icons
1804719 - Update vte291 to 0.52.4
1805929 - RHEL 8.1 gnome-shell-extension errors
1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp
1814820 - No checkbox to install updates in the shutdown dialog
1816070 - "search for an application to open this file" dialog broken
1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution
1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1817143 - Rebase WebKitGTK to 2.28
1820759 - Include IO stall fixes
1820760 - Include IO fixes
1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening
1827030 - gnome-settings-daemon: subscription notification on CentOS Stream
1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content
1832347 - [Rebase] Rebase pipewire to 0.3.x
1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install
1837381 - Backport screen cast improvements to 8.3
1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version
1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6
1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113
1840080 - Can not control top bar menus via keys in Wayland
1840788 - [flatpak][rhel8] unable to build potrace as dependency
1843486 - Software crash after clicking Updates tab
1844578 - anaconda very rarely crashes at startup with a pygobject traceback
1846191 - usb adapters hotplug crashes gnome-shell
1847051 - JS ERROR: TypeError: area is null
1847061 - File search doesn't work under certain locales
1847062 - gnome-remote-desktop crash on QXL graphics
1847203 - gnome-shell: 
get_top_visible_window_actor(): gnome-shell killed by SIGSEGV\n1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow\n1854734 - PipeWire 0.2 should be required by xdg-desktop-portal\n1866332 - Remove obsolete libusb-devel dependency\n1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at \"Started GNOME Display Manager\" - GDM regression issue. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\nSource:\nLibRaw-0.19.5-2.el8.src.rpm\nPackageKit-1.1.12-6.el8.src.rpm\ndleyna-renderer-0.6.0-3.el8.src.rpm\nfrei0r-plugins-1.6.1-7.el8.src.rpm\ngdm-3.28.3-34.el8.src.rpm\ngnome-control-center-3.28.2-22.el8.src.rpm\ngnome-photos-3.28.1-3.el8.src.rpm\ngnome-remote-desktop-0.1.8-3.el8.src.rpm\ngnome-session-3.28.1-10.el8.src.rpm\ngnome-settings-daemon-3.32.0-11.el8.src.rpm\ngnome-shell-3.32.2-20.el8.src.rpm\ngnome-shell-extensions-3.32.1-11.el8.src.rpm\ngnome-terminal-3.28.3-2.el8.src.rpm\ngtk3-3.22.30-6.el8.src.rpm\ngvfs-1.36.2-10.el8.src.rpm\nmutter-3.32.2-48.el8.src.rpm\nnautilus-3.28.1-14.el8.src.rpm\npipewire-0.3.6-1.el8.src.rpm\npipewire0.2-0.2.7-6.el8.src.rpm\npotrace-1.15-3.el8.src.rpm\ntracker-2.1.5-2.el8.src.rpm\nvte291-0.52.4-2.el8.src.rpm\nwebkit2gtk3-2.28.4-1.el8.src.rpm\nwebrtc-audio-processing-0.3-9.el8.src.rpm\nxdg-desktop-portal-1.6.0-2.el8.src.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.src.rpm\n\naarch64:\nPackageKit-1.1.12-6.el8.aarch64.rpm\nPackageKit-command-not-found-1.1.12-6.el8.aarch64.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-cron-1.1.12-6.el8.aarch64.rpm\nPackageKit-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-debugsource-1.1.12-6.el8.aarch64.rpm\nPackageKit-glib-1.1.12-6.el8.aarch64.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.aarch64.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.aarch64.rpm\nPackageKit-gtk3-module-debuginfo-1.1
.12-6.el8.aarch64.rpm\nfrei0r-plugins-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-opencv-debuginfo-1.6.1-7.el8.aarch64.rpm\ngdm-3.28.3-34.el8.aarch64.rpm\ngdm-debuginfo-3.28.3-34.el8.aarch64.rpm\ngdm-debugsource-3.28.3-34.el8.aarch64.rpm\ngnome-control-center-3.28.2-22.el8.aarch64.rpm\ngnome-control-center-debuginfo-3.28.2-22.el8.aarch64.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.aarch64.rpm\ngnome-remote-desktop-0.1.8-3.el8.aarch64.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.aarch64.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.aarch64.rpm\ngnome-session-3.28.1-10.el8.aarch64.rpm\ngnome-session-debuginfo-3.28.1-10.el8.aarch64.rpm\ngnome-session-debugsource-3.28.1-10.el8.aarch64.rpm\ngnome-session-wayland-session-3.28.1-10.el8.aarch64.rpm\ngnome-session-xsession-3.28.1-10.el8.aarch64.rpm\ngnome-settings-daemon-3.32.0-11.el8.aarch64.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.aarch64.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.aarch64.rpm\ngnome-shell-3.32.2-20.el8.aarch64.rpm\ngnome-shell-debuginfo-3.32.2-20.el8.aarch64.rpm\ngnome-shell-debugsource-3.32.2-20.el8.aarch64.rpm\ngnome-terminal-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.aarch64.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.aarch64.rpm\ngtk-update-icon-cache-3.22.30-6.el8.aarch64.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-3.22.30-6.el8.aarch64.rpm\ngtk3-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-debugsource-3.22.30-6.el8.aarch64.rpm\ngtk3-devel-3.22.30-6.el8.aarch64.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-immodule-xim-3.22.30-6.el8.aarch64.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-
immodules-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.aarch64.rpm\ngvfs-1.36.2-10.el8.aarch64.rpm\ngvfs-afc-1.36.2-10.el8.aarch64.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-afp-1.36.2-10.el8.aarch64.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-archive-1.36.2-10.el8.aarch64.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-client-1.36.2-10.el8.aarch64.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-debugsource-1.36.2-10.el8.aarch64.rpm\ngvfs-devel-1.36.2-10.el8.aarch64.rpm\ngvfs-fuse-1.36.2-10.el8.aarch64.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-goa-1.36.2-10.el8.aarch64.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-gphoto2-1.36.2-10.el8.aarch64.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-mtp-1.36.2-10.el8.aarch64.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-smb-1.36.2-10.el8.aarch64.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.aarch64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.aarch64.rpm\nlibsoup-debugsource-2.62.3-2.el8.aarch64.rpm\nlibsoup-devel-2.62.3-2.el8.aarch64.rpm\nmutter-3.32.2-48.el8.aarch64.rpm\nmutter-debuginfo-3.32.2-48.el8.aarch64.rpm\nmutter-debugsource-3.32.2-48.el8.aarch64.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.aarch64.rpm\nnautilus-3.28.1-14.el8.aarch64.rpm\nnautilus-debuginfo-3.28.1-14.el8.aarch64.rpm\nnautilus-debugsource-3.28.1-14.el8.aarch64.rpm\nnautilus-extensions-3.28.1-14.el8.aarch64.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.aarch64.rpm\npipewire-0.3.6-1.el8.aarch64.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-debugsource-0.3.6-1.el8.aarch64.rpm\npipewire-devel-0.3.6-1.el8.aarch64.rpm\npipewire-doc-0.3.6-1.el8.aarch64.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-libs-0.3.6-1.el8.aarch64.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-utils-0.3.6-1.el8.aarch64.rpm\npipewire-utils-d
ebuginfo-0.3.6-1.el8.aarch64.rpm\npipewire0.2-debugsource-0.2.7-6.el8.aarch64.rpm\npipewire0.2-devel-0.2.7-6.el8.aarch64.rpm\npipewire0.2-libs-0.2.7-6.el8.aarch64.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.aarch64.rpm\npotrace-1.15-3.el8.aarch64.rpm\npotrace-debuginfo-1.15-3.el8.aarch64.rpm\npotrace-debugsource-1.15-3.el8.aarch64.rpm\npygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm\npygobject3-debugsource-3.28.3-2.el8.aarch64.rpm\npython3-gobject-3.28.3-2.el8.aarch64.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm\ntracker-2.1.5-2.el8.aarch64.rpm\ntracker-debuginfo-2.1.5-2.el8.aarch64.rpm\ntracker-debugsource-2.1.5-2.el8.aarch64.rpm\nvte-profile-0.52.4-2.el8.aarch64.rpm\nvte291-0.52.4-2.el8.aarch64.rpm\nvte291-debuginfo-0.52.4-2.el8.aarch64.rpm\nvte291-debugsource-0.52.4-2.el8.aarch64.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.aarch64.rpm\nwebkit2gtk3-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebrtc-audio-processing-0.3-9.el8.aarch64.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.aarch64.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.aarch64.rpm\nxdg-desktop-portal-1.6.0-2.el8.aarch64.rpm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.aarch64.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.aarch64.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.aarch64.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.aarch64.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.aarch64.rpm\n\nnoarch:\ngnome-classic-session-3.32.1-11.el8.noarch.rpm\ngnome-control-center-filesystem-3.28.2-22.el8.noarch.rpm\ngnome-shell-extension-apps-menu-3.32.1-11.el8.noarch
.rpm\ngnome-shell-extension-auto-move-windows-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-common-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-dash-to-dock-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-desktop-icons-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-disable-screenshield-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-drive-menu-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-horizontal-workspaces-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-launch-new-instance-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-native-window-placement-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-no-hot-corner-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-panel-favorites-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-places-menu-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-screenshot-window-sizer-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-systemMonitor-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-top-icons-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-updates-dialog-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-user-theme-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-window-grouper-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-window-list-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-windowsNavigator-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-workspace-indicator-3.32.1-11.el8.noarch.rpm\n\nppc64le:\nLibRaw-0.19.5-2.el8.ppc64le.rpm\nLibRaw-debuginfo-0.19.5-2.el8.ppc64le.rpm\nLibRaw-debugsource-0.19.5-2.el8.ppc64le.rpm\nLibRaw-samples-debuginfo-0.19.5-2.el8.ppc64le.rpm\nPackageKit-1.1.12-6.el8.ppc64le.rpm\nPackageKit-command-not-found-1.1.12-6.el8.ppc64le.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-cron-1.1.12-6.el8.ppc64le.rpm\nPackageKit-debuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-debugsource-1.1.12-6.el8.ppc64le.rpm\nPackageKit-glib-1.1.12-6.el8.ppc64le.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gstreamer-plugin-d
ebuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.ppc64le.rpm\ndleyna-renderer-0.6.0-3.el8.ppc64le.rpm\ndleyna-renderer-debuginfo-0.6.0-3.el8.ppc64le.rpm\ndleyna-renderer-debugsource-0.6.0-3.el8.ppc64le.rpm\nfrei0r-plugins-1.6.1-7.el8.ppc64le.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.ppc64le.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.ppc64le.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.ppc64le.rpm\nfrei0r-plugins-opencv-debuginfo-1.6.1-7.el8.ppc64le.rpm\ngdm-3.28.3-34.el8.ppc64le.rpm\ngdm-debuginfo-3.28.3-34.el8.ppc64le.rpm\ngdm-debugsource-3.28.3-34.el8.ppc64le.rpm\ngnome-control-center-3.28.2-22.el8.ppc64le.rpm\ngnome-control-center-debuginfo-3.28.2-22.el8.ppc64le.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.ppc64le.rpm\ngnome-photos-3.28.1-3.el8.ppc64le.rpm\ngnome-photos-debuginfo-3.28.1-3.el8.ppc64le.rpm\ngnome-photos-debugsource-3.28.1-3.el8.ppc64le.rpm\ngnome-photos-tests-3.28.1-3.el8.ppc64le.rpm\ngnome-remote-desktop-0.1.8-3.el8.ppc64le.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.ppc64le.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.ppc64le.rpm\ngnome-session-3.28.1-10.el8.ppc64le.rpm\ngnome-session-debuginfo-3.28.1-10.el8.ppc64le.rpm\ngnome-session-debugsource-3.28.1-10.el8.ppc64le.rpm\ngnome-session-wayland-session-3.28.1-10.el8.ppc64le.rpm\ngnome-session-xsession-3.28.1-10.el8.ppc64le.rpm\ngnome-settings-daemon-3.32.0-11.el8.ppc64le.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.ppc64le.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.ppc64le.rpm\ngnome-shell-3.32.2-20.el8.ppc64le.rpm\ngnome-shell-debuginfo-3.32.2-20.el8.ppc64le.rpm\ngnome-shell-debugsource-3.32.2-20.el8.ppc64le.rpm\ngnome-terminal-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.ppc64le.rpm\ngsettings-desktop-schemas-
devel-3.32.0-5.el8.ppc64le.rpm\ngtk-update-icon-cache-3.22.30-6.el8.ppc64le.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-3.22.30-6.el8.ppc64le.rpm\ngtk3-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-debugsource-3.22.30-6.el8.ppc64le.rpm\ngtk3-devel-3.22.30-6.el8.ppc64le.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-immodule-xim-3.22.30-6.el8.ppc64le.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngvfs-1.36.2-10.el8.ppc64le.rpm\ngvfs-afc-1.36.2-10.el8.ppc64le.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-afp-1.36.2-10.el8.ppc64le.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-archive-1.36.2-10.el8.ppc64le.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-client-1.36.2-10.el8.ppc64le.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-debugsource-1.36.2-10.el8.ppc64le.rpm\ngvfs-devel-1.36.2-10.el8.ppc64le.rpm\ngvfs-fuse-1.36.2-10.el8.ppc64le.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-goa-1.36.2-10.el8.ppc64le.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-gphoto2-1.36.2-10.el8.ppc64le.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-mtp-1.36.2-10.el8.ppc64le.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-smb-1.36.2-10.el8.ppc64le.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.ppc64le.rpm\nlibsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm\nlibsoup-debugsource-2.62.3-2.el8.ppc64le.rpm\nlibsoup-devel-2.62.3-2.el8.ppc64le.rpm\nmutter-3.32.2-48.el8.ppc64le.rpm\nmutter-debuginfo-3.32.2-48.el8.ppc64le.rpm\nmutter-debugsource-3.32.2-48.el8.ppc64le.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.ppc64le.rpm\nnautilus-3.28.1-14.el8.ppc64le.rpm\nnautilus-debuginfo-3.28.1-14.el8.ppc64le.rpm\nnautilus-debugsource-3.28.1-14.el8.ppc64le.rpm\nnautilus-extensions-3.28.1-14.el8.ppc64le.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.ppc64le.rpm\n
pipewire-0.3.6-1.el8.ppc64le.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-debugsource-0.3.6-1.el8.ppc64le.rpm\npipewire-devel-0.3.6-1.el8.ppc64le.rpm\npipewire-doc-0.3.6-1.el8.ppc64le.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-libs-0.3.6-1.el8.ppc64le.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-utils-0.3.6-1.el8.ppc64le.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire0.2-debugsource-0.2.7-6.el8.ppc64le.rpm\npipewire0.2-devel-0.2.7-6.el8.ppc64le.rpm\npipewire0.2-libs-0.2.7-6.el8.ppc64le.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.ppc64le.rpm\npotrace-1.15-3.el8.ppc64le.rpm\npotrace-debuginfo-1.15-3.el8.ppc64le.rpm\npotrace-debugsource-1.15-3.el8.ppc64le.rpm\npygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm\npygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm\ntracker-2.1.5-2.el8.ppc64le.rpm\ntracker-debuginfo-2.1.5-2.el8.ppc64le.rpm\ntracker-debugsource-2.1.5-2.el8.ppc64le.rpm\nvte-profile-0.52.4-2.el8.ppc64le.rpm\nvte291-0.52.4-2.el8.ppc64le.rpm\nvte291-debuginfo-0.52.4-2.el8.ppc64le.rpm\nvte291-debugsource-0.52.4-2.el8.ppc64le.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.ppc64le.rpm\nwebkit2gtk3-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebrtc-audio-processing-0.3-9.el8.ppc64le.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.ppc64le.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.ppc64le.rpm\nxdg-desktop-portal-1.6.0-2.el8.ppc64le.r
pm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.ppc64le.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.ppc64le.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.ppc64le.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.ppc64le.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.ppc64le.rpm\n\ns390x:\nPackageKit-1.1.12-6.el8.s390x.rpm\nPackageKit-command-not-found-1.1.12-6.el8.s390x.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-cron-1.1.12-6.el8.s390x.rpm\nPackageKit-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-debugsource-1.1.12-6.el8.s390x.rpm\nPackageKit-glib-1.1.12-6.el8.s390x.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.s390x.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.s390x.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.s390x.rpm\nfrei0r-plugins-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-opencv-debuginfo-1.6.1-7.el8.s390x.rpm\ngdm-3.28.3-34.el8.s390x.rpm\ngdm-debuginfo-3.28.3-34.el8.s390x.rpm\ngdm-debugsource-3.28.3-34.el8.s390x.rpm\ngnome-control-center-3.28.2-22.el8.s390x.rpm\ngnome-control-center-debuginfo-3.28.2-22.el8.s390x.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.s390x.rpm\ngnome-remote-desktop-0.1.8-3.el8.s390x.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.s390x.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.s390x.rpm\ngnome-session-3.28.1-10.el8.s390x.rpm\ngnome-session-debuginfo-3.28.1-10.el8.s390x.rpm\ngnome-session-debugsource-3.28.1-10.el8.s390x.rpm\ngnome-session-wayland-session-3.28.1-10.el8.s390x.rpm\ngnome-session-xsession-3.28.1-10.el8.s390x.rpm\ngnome-settings-daemon-3.32.0-11.el8.s390x.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.s390x.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.s390x.rpm\ngnome-shell-3.32.2-20.el8.s390x.rpm\ngnome-shell-debugi
nfo-3.32.2-20.el8.s390x.rpm\ngnome-shell-debugsource-3.32.2-20.el8.s390x.rpm\ngnome-terminal-3.28.3-2.el8.s390x.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.s390x.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.s390x.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.s390x.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.s390x.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.s390x.rpm\ngtk-update-icon-cache-3.22.30-6.el8.s390x.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-3.22.30-6.el8.s390x.rpm\ngtk3-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-debugsource-3.22.30-6.el8.s390x.rpm\ngtk3-devel-3.22.30-6.el8.s390x.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-immodule-xim-3.22.30-6.el8.s390x.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.s390x.rpm\ngvfs-1.36.2-10.el8.s390x.rpm\ngvfs-afp-1.36.2-10.el8.s390x.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-archive-1.36.2-10.el8.s390x.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-client-1.36.2-10.el8.s390x.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-debugsource-1.36.2-10.el8.s390x.rpm\ngvfs-devel-1.36.2-10.el8.s390x.rpm\ngvfs-fuse-1.36.2-10.el8.s390x.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-goa-1.36.2-10.el8.s390x.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-gphoto2-1.36.2-10.el8.s390x.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-mtp-1.36.2-10.el8.s390x.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-smb-1.36.2-10.el8.s390x.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.s390x.rpm\nlibsoup-debuginfo-2.62.3-2.el8.s390x.rpm\nlibsoup-debugsource-2.62.3-2.el8.s390x.rpm\nlibsoup-devel-2.62.3-2.el8.s390x.rpm\nmutter-3.32.2-48.el8.s390x.rpm\nmutter-debuginfo-3.32.2-48.el8.s390x.rpm\nmutter-debugsource-3.32.2-48.el8.s390x.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.s390x.rpm\nnautilus-3.28.1-14.el8.s390x.rpm\nnautilus-de
buginfo-3.28.1-14.el8.s390x.rpm\nnautilus-debugsource-3.28.1-14.el8.s390x.rpm\nnautilus-extensions-3.28.1-14.el8.s390x.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.s390x.rpm\npipewire-0.3.6-1.el8.s390x.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-debugsource-0.3.6-1.el8.s390x.rpm\npipewire-devel-0.3.6-1.el8.s390x.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-libs-0.3.6-1.el8.s390x.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-utils-0.3.6-1.el8.s390x.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire0.2-debugsource-0.2.7-6.el8.s390x.rpm\npipewire0.2-devel-0.2.7-6.el8.s390x.rpm\npipewire0.2-libs-0.2.7-6.el8.s390x.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.s390x.rpm\npotrace-1.15-3.el8.s390x.rpm\npotrace-debuginfo-1.15-3.el8.s390x.rpm\npotrace-debugsource-1.15-3.el8.s390x.rpm\npygobject3-debuginfo-3.28.3-2.el8.s390x.rpm\npygobject3-debugsource-3.28.3-2.el8.s390x.rpm\npython3-gobject-3.28.3-2.el8.s390x.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.s390x.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm\ntracker-2.1.5-2.el8.s390x.rpm\ntracker-debuginfo-2.1.5-2.el8.s390x.rpm\ntracker-debugsource-2.1.5-2.el8.s390x.rpm\nvte-profile-0.52.4-2.el8.s390x.rpm\nvte291-0.52.4-2.el8.s390x.rpm\nvte291-debuginfo-0.52.4-2.el8.s390x.rpm\nvte291-debugsource-0.52.4-2.el8.s390x.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.s390x.rpm\nwebkit2gtk3-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.s390x.rpm\nwebrtc-audio-processing-0.3-9.el8.s390x.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.s390x.rpm\nwebrtc-audio-processing-debugsource-0.3-
9.el8.s390x.rpm\nxdg-desktop-portal-1.6.0-2.el8.s390x.rpm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.s390x.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.s390x.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.s390x.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.s390x.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.s390x.rpm\n\nx86_64:\nLibRaw-0.19.5-2.el8.i686.rpm\nLibRaw-0.19.5-2.el8.x86_64.rpm\nLibRaw-debuginfo-0.19.5-2.el8.i686.rpm\nLibRaw-debuginfo-0.19.5-2.el8.x86_64.rpm\nLibRaw-debugsource-0.19.5-2.el8.i686.rpm\nLibRaw-debugsource-0.19.5-2.el8.x86_64.rpm\nLibRaw-samples-debuginfo-0.19.5-2.el8.i686.rpm\nLibRaw-samples-debuginfo-0.19.5-2.el8.x86_64.rpm\nPackageKit-1.1.12-6.el8.x86_64.rpm\nPackageKit-command-not-found-1.1.12-6.el8.x86_64.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-cron-1.1.12-6.el8.x86_64.rpm\nPackageKit-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-debugsource-1.1.12-6.el8.i686.rpm\nPackageKit-debugsource-1.1.12-6.el8.x86_64.rpm\nPackageKit-glib-1.1.12-6.el8.i686.rpm\nPackageKit-glib-1.1.12-6.el8.x86_64.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.x86_64.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.i686.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.x86_64.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.x86_64.rpm\ndleyna-renderer-0.6.0-3.el8.x86_64.rpm\ndleyna-renderer-debuginfo-0.6.0-3.el8.x86_64.rpm\ndleyna-renderer-debugsource-0.6.0-3.el8.x86_64.rpm\nfrei0r-plugins-1.6.1-7.el8.x86_64.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.x86_64.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.x86_64.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.x86_64.rpm\nfre
i0r-plugins-opencv-debuginfo-1.6.1-7.el8.x86_64.rpm\ngdm-3.28.3-34.el8.i686.rpm\ngdm-3.28.3-34.el8.x86_64.rpm\ngdm-debuginfo-3.28.3-34.el8.i686.rpm\ngdm-debuginfo-3.28.3-34.el8.x86_64.rpm\ngdm-debugsource-3.28.3-34.el8.i686.rpm\ngdm-debugsource-3.28.3-34.el8.x86_64.rpm\ngnome-control-center-3.28.2-22.el8.x86_64.rpm\ngnome-control-center-debuginfo-3.28.2-22.el8.x86_64.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.x86_64.rpm\ngnome-photos-3.28.1-3.el8.x86_64.rpm\ngnome-photos-debuginfo-3.28.1-3.el8.x86_64.rpm\ngnome-photos-debugsource-3.28.1-3.el8.x86_64.rpm\ngnome-photos-tests-3.28.1-3.el8.x86_64.rpm\ngnome-remote-desktop-0.1.8-3.el8.x86_64.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.x86_64.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.x86_64.rpm\ngnome-session-3.28.1-10.el8.x86_64.rpm\ngnome-session-debuginfo-3.28.1-10.el8.x86_64.rpm\ngnome-session-debugsource-3.28.1-10.el8.x86_64.rpm\ngnome-session-wayland-session-3.28.1-10.el8.x86_64.rpm\ngnome-session-xsession-3.28.1-10.el8.x86_64.rpm\ngnome-settings-daemon-3.32.0-11.el8.x86_64.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.x86_64.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.x86_64.rpm\ngnome-shell-3.32.2-20.el8.x86_64.rpm\ngnome-shell-debuginfo-3.32.2-20.el8.x86_64.rpm\ngnome-shell-debugsource-3.32.2-20.el8.x86_64.rpm\ngnome-terminal-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.x86_64.rpm\ngsettings-desktop-schemas-3.32.0-5.el8.i686.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.i686.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.x86_64.rpm\ngtk-update-icon-cache-3.22.30-6.el8.x86_64.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.i686.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-3.22.30-6.el8.i686.rpm\ngtk3-3.22.30-6.el8.x86_64.rpm\ngtk3-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-deb
uginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-debugsource-3.22.30-6.el8.i686.rpm\ngtk3-debugsource-3.22.30-6.el8.x86_64.rpm\ngtk3-devel-3.22.30-6.el8.i686.rpm\ngtk3-devel-3.22.30-6.el8.x86_64.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-immodule-xim-3.22.30-6.el8.x86_64.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.x86_64.rpm\ngvfs-1.36.2-10.el8.x86_64.rpm\ngvfs-afc-1.36.2-10.el8.x86_64.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-afp-1.36.2-10.el8.x86_64.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-archive-1.36.2-10.el8.x86_64.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-client-1.36.2-10.el8.i686.rpm\ngvfs-client-1.36.2-10.el8.x86_64.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-debugsource-1.36.2-10.el8.i686.rpm\ngvfs-debugsource-1.36.2-10.el8.x86_64.rpm\ngvfs-devel-1.36.2-10.el8.i686.rpm\ngvfs-devel-1.36.2-10.el8.x86_64.rpm\ngvfs-fuse-1.36.2-10.el8.x86_64.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-goa-1.36.2-10.el8.x86_64.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-gphoto2-1.36.2-10.el8.x86_64.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-mtp-1.36.2-10.el8.x86_64.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-smb-1.36.2-10.el8.x86_64.rpm\ngvfs-smb-debuginfo-
1.36.2-10.el8.i686.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.x86_64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.i686.rpm\nlibsoup-debuginfo-2.62.3-2.el8.x86_64.rpm\nlibsoup-debugsource-2.62.3-2.el8.i686.rpm\nlibsoup-debugsource-2.62.3-2.el8.x86_64.rpm\nlibsoup-devel-2.62.3-2.el8.i686.rpm\nlibsoup-devel-2.62.3-2.el8.x86_64.rpm\nmutter-3.32.2-48.el8.i686.rpm\nmutter-3.32.2-48.el8.x86_64.rpm\nmutter-debuginfo-3.32.2-48.el8.i686.rpm\nmutter-debuginfo-3.32.2-48.el8.x86_64.rpm\nmutter-debugsource-3.32.2-48.el8.i686.rpm\nmutter-debugsource-3.32.2-48.el8.x86_64.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.i686.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.x86_64.rpm\nnautilus-3.28.1-14.el8.x86_64.rpm\nnautilus-debuginfo-3.28.1-14.el8.i686.rpm\nnautilus-debuginfo-3.28.1-14.el8.x86_64.rpm\nnautilus-debugsource-3.28.1-14.el8.i686.rpm\nnautilus-debugsource-3.28.1-14.el8.x86_64.rpm\nnautilus-extensions-3.28.1-14.el8.i686.rpm\nnautilus-extensions-3.28.1-14.el8.x86_64.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.i686.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.x86_64.rpm\npipewire-0.3.6-1.el8.i686.rpm\npipewire-0.3.6-1.el8.x86_64.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-debugsource-0.3.6-1.el8.i686.rpm\npipewire-debugsource-0.3.6-1.el8.x86_64.rpm\npipewire-devel-0.3.6-1.el8.i686.rpm\npipewire-devel-0.3.6-1.el8.x86_64.rpm\npipewire-doc-0.3.6-1.el8.x86_64.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-libs-0.3.6-1.el8.i686.rpm\npipewire-libs-0.3.6-1.el8.x86_64.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-utils-0.3.6-1.el8.x86_64.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire0.2-debugsource-0.2.7-6.el8.i686.rpm\npipewire0.2-debugsource-0.2.7-6.el8.x86_
64.rpm\npipewire0.2-devel-0.2.7-6.el8.i686.rpm\npipewire0.2-devel-0.2.7-6.el8.x86_64.rpm\npipewire0.2-libs-0.2.7-6.el8.i686.rpm\npipewire0.2-libs-0.2.7-6.el8.x86_64.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.i686.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.x86_64.rpm\npotrace-1.15-3.el8.i686.rpm\npotrace-1.15-3.el8.x86_64.rpm\npotrace-debuginfo-1.15-3.el8.i686.rpm\npotrace-debuginfo-1.15-3.el8.x86_64.rpm\npotrace-debugsource-1.15-3.el8.i686.rpm\npotrace-debugsource-1.15-3.el8.x86_64.rpm\npygobject3-debuginfo-3.28.3-2.el8.i686.rpm\npygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm\npygobject3-debugsource-3.28.3-2.el8.i686.rpm\npygobject3-debugsource-3.28.3-2.el8.x86_64.rpm\npython3-gobject-3.28.3-2.el8.i686.rpm\npython3-gobject-3.28.3-2.el8.x86_64.rpm\npython3-gobject-base-3.28.3-2.el8.i686.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.i686.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.i686.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm\ntracker-2.1.5-2.el8.i686.rpm\ntracker-2.1.5-2.el8.x86_64.rpm\ntracker-debuginfo-2.1.5-2.el8.i686.rpm\ntracker-debuginfo-2.1.5-2.el8.x86_64.rpm\ntracker-debugsource-2.1.5-2.el8.i686.rpm\ntracker-debugsource-2.1.5-2.el8.x86_64.rpm\nvte-profile-0.52.4-2.el8.x86_64.rpm\nvte291-0.52.4-2.el8.i686.rpm\nvte291-0.52.4-2.el8.x86_64.rpm\nvte291-debuginfo-0.52.4-2.el8.i686.rpm\nvte291-debuginfo-0.52.4-2.el8.x86_64.rpm\nvte291-debugsource-0.52.4-2.el8.i686.rpm\nvte291-debugsource-0.52.4-2.el8.x86_64.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.i686.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.x86_64.rpm\nwebkit2gtk3-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.i686.rpm\nwe
bkit2gtk3-devel-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebrtc-audio-processing-0.3-9.el8.i686.rpm\nwebrtc-audio-processing-0.3-9.el8.x86_64.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.i686.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.x86_64.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.i686.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.x86_64.rpm\nxdg-desktop-portal-1.6.0-2.el8.x86_64.rpm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.x86_64.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.x86_64.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.x86_64.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.x86_64.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
8):\n\nSource:\ngsettings-desktop-schemas-3.32.0-5.el8.src.rpm\nlibsoup-2.62.3-2.el8.src.rpm\npygobject3-3.28.3-2.el8.src.rpm\n\naarch64:\ngsettings-desktop-schemas-3.32.0-5.el8.aarch64.rpm\nlibsoup-2.62.3-2.el8.aarch64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.aarch64.rpm\nlibsoup-debugsource-2.62.3-2.el8.aarch64.rpm\npygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm\npygobject3-debugsource-3.28.3-2.el8.aarch64.rpm\npython3-gobject-base-3.28.3-2.el8.aarch64.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm\n\nppc64le:\ngsettings-desktop-schemas-3.32.0-5.el8.ppc64le.rpm\nlibsoup-2.62.3-2.el8.ppc64le.rpm\nlibsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm\nlibsoup-debugsource-2.62.3-2.el8.ppc64le.rpm\npygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm\npygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-base-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm\n\ns390x:\ngsettings-desktop-schemas-3.32.0-5.el8.s390x.rpm\nlibsoup-2.62.3-2.el8.s390x.rpm\nlibsoup-debuginfo-2.62.3-2.el8.s390x.rpm\nlibsoup-debugsource-2.62.3-2.el8.s390x.rpm\npygobject3-debuginfo-3.28.3-2.el8.s390x.rpm\npygobject3-debugsource-3.28.3-2.el8.s390x.rpm\npython3-gobject-base-3.28.3-2.el8.s390x.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.s390x.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm\n\nx86_64:\ngsettings-desktop-schemas-3.32.0-5.el8.x86_64.rpm\nlibsoup-2.62.3-2.el8.i686.rpm\nlibsoup-2.62.3-2.el8.x86_64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.i686.rpm\nlibsoup-debuginfo-2.62.3-2.el8.x86_64.rpm\nlibsoup-debugsource-2.62.3-2.el8.i686.rpm\nlibsoup-debugsource-2.62.3-2.el8.x86_64.rpm\npygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm\npygobject3-debugsource-3.28.3-2.el8.x86_64.rpm\npython3-gobject-base-3.28.3-2.el8.x86_64.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm\n\nRed Hat CodeReady Linux 
Builder (v. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202003-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: March 15, 2020\n     Bugs: #699156, #706374, #709612\n       ID: 202003-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich may lead to arbitrary code execution. \n\nBackground\n==========\n\nWebKitGTK+ is a full-featured port of the WebKit rendering engine,\nsuitable for projects requiring any kind of web integration, from\nhybrid HTML/CSS applications to full-fledged web browsers. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk          \u003c 2.26.4                  \u003e= 2.26.4\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.26.4\"\n\nReferences\n==========\n\n[  1 ] CVE-2019-8625\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8625\n[  2 ] CVE-2019-8674\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8674\n[  3 ] CVE-2019-8707\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8707\n[  4 ] CVE-2019-8710\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8710\n[  5 ] CVE-2019-8719\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8719\n[  6 ] CVE-2019-8720\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8720\n[  7 ] CVE-2019-8726\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8726\n[  8 ] CVE-2019-8733\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8733\n[  9 ] CVE-2019-8735\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8735\n[ 10 ] CVE-2019-8743\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8743\n[ 11 ] CVE-2019-8763\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8763\n[ 12 ] CVE-2019-8764\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8764\n[ 13 ] CVE-2019-8765\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8765\n[ 14 ] CVE-2019-8766\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8766\n[ 15 ] CVE-2019-8768\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8768\n[ 16 ] CVE-2019-8769\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8769\n[ 17 ] CVE-2019-8771\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8771\n[ 18 ] CVE-2019-8782\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8782\n[ 19 ] CVE-2019-8783\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8783\n[ 20 ] CVE-2019-8808\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8808\n[ 21 ] CVE-2019-8811\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8811\n[ 22 ] CVE-2019-8812\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8812\n[ 23 ] CVE-2019-8813\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8813\n[ 24 ] 
CVE-2019-8814\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8814\n[ 25 ] CVE-2019-8815\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8815\n[ 26 ] CVE-2019-8816\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8816\n[ 27 ] CVE-2019-8819\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8819\n[ 28 ] CVE-2019-8820\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8820\n[ 29 ] CVE-2019-8821\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8821\n[ 30 ] CVE-2019-8822\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8822\n[ 31 ] CVE-2019-8823\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8823\n[ 32 ] CVE-2019-8835\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8835\n[ 33 ] CVE-2019-8844\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8844\n[ 34 ] CVE-2019-8846\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8846\n[ 35 ] CVE-2020-3862\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3862\n[ 36 ] CVE-2020-3864\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3864\n[ 37 ] CVE-2020-3865\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3865\n[ 38 ] CVE-2020-3867\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3867\n[ 39 ] CVE-2020-3868\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3868\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202003-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-3865"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      }
    ],
    "trust": 2.52
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-3865",
        "trust": 3.4
      },
      {
        "db": "JVN",
        "id": "JVNVU95678717",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0538",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0346",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0567",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0677",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156153",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156398",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156392",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-15551",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-181990",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3865",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "id": "VAR-202002-1478",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181990"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:32:50.434000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT210923",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210923"
      },
      {
        "title": "HT210947",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210947"
      },
      {
        "title": "HT210948",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210948"
      },
      {
        "title": "HT210918",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210918"
      },
      {
        "title": "HT210920",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210920"
      },
      {
        "title": "HT210922",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210922"
      },
      {
        "title": "HT210947",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210947"
      },
      {
        "title": "HT210948",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210948"
      },
      {
        "title": "HT210918",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210918"
      },
      {
        "title": "HT210920",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210920"
      },
      {
        "title": "HT210922",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210922"
      },
      {
        "title": "HT210923",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210923"
      },
      {
        "title": "openSUSE-SU-2020:0278-1",
        "trust": 0.8,
        "url": "http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00004.html"
      },
      {
        "title": "Multiple Apple product WebKit Page Loading Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=111057"
      },
      {
        "title": "Ubuntu Security Notice: webkit2gtk vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4281-1"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2020-3865 log"
      },
      {
        "title": "Debian Security Advisories: DSA-4627-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=78d59627ce9fc1018e3643bc1e16ec00"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202002-10] webkit2gtk: multiple issues",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202002-10"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      },
      {
        "title": "Threatpost",
        "trust": 0.1,
        "url": "https://threatpost.com/apple-safari-flaws-webcam-access/154476/"
      },
      {
        "title": "BleepingComputer",
        "trust": 0.1,
        "url": "https://www.bleepingcomputer.com/news/security/apple-paid-75k-for-bugs-letting-sites-hijack-iphone-cameras/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.1
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.9
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210947"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210948"
      },
      {
        "trust": 1.8,
        "url": "http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00004.html"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-3865"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu95678717/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht210795"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht210794"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210947"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0538/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156392/webkitgtk-wpe-webkit-dos-logic-issue-code-execution.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156153/apple-security-advisory-2020-1-29-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156398/debian-security-advisory-4627-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0346/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-five-vulnerabilities-31619"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0567/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0677/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.4,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4281-1/"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11793"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10018"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/site/solutions/537113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15503"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8726"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-02-27T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "date": "2020-02-27T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-11-04T15:24:00",
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "date": "2020-03-15T14:00:23",
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "date": "2020-01-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      },
      {
        "date": "2020-03-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "date": "2020-02-27T21:15:17.990000",
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181990"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3865"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      },
      {
        "date": "2020-03-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      },
      {
        "date": "2024-11-21T05:31:51.763000",
        "db": "NVD",
        "id": "CVE-2020-3865"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "plural  Apple Product Corruption Vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002349"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1411"
      }
    ],
    "trust": 0.6
  }
}

VAR-202002-1182

Vulnerability from variot - Updated: 2025-12-22 23:28

A logic issue was addressed with improved state management. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to universal cross-site scripting. Apple Safari and the other affected software are Apple products. Apple Safari is a web browser shipped as the default browser of the Mac OS X and iOS operating systems. Apple tvOS is a smart TV operating system; it supports storage of music, photos, apps, contacts, and more. WebKit is a web browser engine component. A security vulnerability exists in the WebKit component of several Apple products. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. 
(CVE-2019-8846) WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (the versions immediately prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote attackers to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. A remote attacker may be able to cause arbitrary code execution. A remote attacker may be able to cause arbitrary code execution. (CVE-2020-3902). In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

  1. Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume 1813506 - Dockerfile not compatible with docker and buildah 1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup 1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement 1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance 1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https) 1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. 1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default 1842254 - [NooBaa] Compression stats do not add up when compression id disabled 1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster 1849771 - [RFE] Account created by OBC should have same permissions as bucket owner 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot 1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume 1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount 1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params) 1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14) 1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage 1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards 1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found 1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining 1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script 1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update Advisory ID: RHSA-2020:5633-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633 Issue date: 2021-02-24 CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
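The digests quoted above let a cluster pin the release image exactly, by referencing it as `repository@sha256:<digest>` rather than by tag. As a minimal illustrative sketch (the helper name is an assumption, not part of `oc` or any Red Hat tooling), such a pinned reference can be split and sanity-checked like this:

```python
import re

# An OCI/Docker sha256 digest: the literal prefix plus 64 lowercase hex chars.
DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def parse_pinned_release(ref: str):
    """Split a digest-pinned image reference (repo@sha256:<hex>) and
    verify the digest format; raise ValueError for anything else."""
    repo, sep, digest = ref.partition("@")
    if not sep or not DIGEST_RE.match(digest):
        raise ValueError(f"not a digest-pinned image reference: {ref!r}")
    return repo, digest

# Example: the x86_64 release digest quoted in this advisory.
repo, digest = parse_pinned_release(
    "quay.io/openshift-release-dev/ocp-release@sha256:"
    "d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70"
)
```

A reference that uses a tag instead of a digest (for example `ocp-release:4.7.0-x86_64`) fails the check, which is the point of pinning: the digest identifies the image content, while a tag can be repointed.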

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state
1752220 - [OVN] Network Policy fails to work when project label gets overwritten
1756096 - Local storage operator should implement must-gather spec
1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
1768255 - installer reports 100% complete but failing components
1770017 - Init containers restart when the exited container is removed from node.
1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating
1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset
1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale
1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands
1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions
1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved"
1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor
1801089 - [OVN] Installation failed and monitoring pod not created due to some network error.
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image
1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration
1806000 - CRI-O failing with: error reserving ctr name
1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1810438 - Installation logs are not gathered from OCP nodes
1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist
1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation
1813012 - EtcdDiscoveryDomain no longer needed
1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints
1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use
1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
1819457 - Package Server is in 'Cannot update' status despite properly working
1820141 - [RFE] deploy qemu-quest-agent on the nodes
1822744 - OCS Installation CI test flaking
1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario
1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool
1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file
1829723 - User workload monitoring alerts fire out of the box
1832968 - oc adm catalog mirror does not mirror the index image itself
1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN
1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters
1834995 - olmFull suite always fails once th suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text.
1853116 - --to option does not work with --credentials-requests flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the create api help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - oc api-resources does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and ‘URL’
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in oc explain
1880787 - No description for Provisioning CRD for oc explain
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to
deploy some of container image on the recent OCP 4.6 nightly build 1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5 1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt 1888363 - namespaces crash in dev 1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created 1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected 1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC 1888494 - imagepruner pod is error when image registry storage is not configured 1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree" 1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error 1888601 - The poddisruptionbudgets is using the operator service account, instead of gather 1888657 - oc doesn't know its name 1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable 1888671 - Document the Cloud Provider's ignore-volume-az setting 1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image 1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName() 1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set 1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster 1888866 - AggregatedAPIDown permanently firing after removing APIService 1888870 - JS error when using autocomplete in YAML editor 1888874 - hover message are not shown for some properties 1888900 - align plugins versions 1888985 - Cypress: Fix 'Ensures buttons have discernible text' accessibility violation 1889213 - The error message of uploading failure is not clear enough 1889267 - Increase the time out for creating template and upload image in the terraform 1889348 - Project link should be
removed from Application Details page, since it is inaccurate (Application Stages) 1889374 - Kiali feature won't work on fresh 4.6 cluster 1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode 1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade 1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information 1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance 1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown 1889577 - Resources are not shown on project workloads page 1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment 1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages 1889692 - Selected Capacity is showing wrong size 1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15 1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off 1889710 - Prometheus metrics on disk take more space compared to OCP 4.5 1889721 - opm index add semver-skippatch mode does not respect prerelease versions 1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab 1889767 - [vsphere] Remove certificate from upi-installer image 1889779 - error when destroying a vSphere installation that failed early 1889787 - OCP is flooding the oVirt engine with auth errors 1889838 - race in Operator update after fix from bz1888073 1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1 1889863 - Router prints incorrect log message for namespace label selector 1889891 - Backport timecache LRU fix 1889912 - Drains can cause high CPU usage 1889921 - Reported Degraded=False Available=False pair does not 
make sense 1889928 - [e2e][automation] Add more tests for golden os 1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName 1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings 1890074 - MCO extension kernel-headers is invalid 1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest 1890130 - multitenant mode consistently fails CI 1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e 1890145 - The mismatched of font size for Status Ready and Health Check secondary text 1890180 - FieldDependency x-descriptor doesn't support non-sibling fields 1890182 - DaemonSet with existing owner garbage collected 1890228 - AWS: destroy stuck on route53 hosted zone not found 1890235 - e2e: update Protractor's checkErrors logging 1890250 - workers may fail to join the cluster during an update from 4.5 1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member 1890270 - External IP doesn't work if the IP address is not assigned to a node 1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability 1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere 1890467 - unable to edit an application without a service 1890472 - [Kuryr] Bulk port creation exception not completely formatted 1890494 - Error assigning Egress IP on GCP 1890530 - cluster-policy-controller doesn't gracefully terminate 1890630 - [Kuryr] Available port count not correctly calculated for alerts 1890671 - [SA] verify-image-signature using service account does not work 1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest 1890808 - New etcd alerts need to be added to the monitoring stack 1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha. 
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config 1890995 - wew-app should provide more insight into why image deployment failed 1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call 1891047 - Helm chart fails to install using developer console because of TLS certificate error 1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler 1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI 1891108 - p&f: Increase the concurrency share of workload-low priority level 1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine) 1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown 1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart) 1891362 - Wrong metrics count for openshift_build_result_total 1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message 1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message 1891376 - Extra text in Cluster Utilization charts 1891419 - Wrong detail head on network policy detail page. 1891459 - Snapshot tests should report stderr of failed commands 1891498 - Other machine config pools do not show during update 1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage 1891551 - Clusterautoscaler doesn't scale up as expected 1891552 - Handle missing labels as empty. 
1891555 - The windows oc.exe binary does not have version metadata 1891559 - kuryr-cni cannot start new thread 1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11 1891625 - [Release 4.7] Mutable LoadBalancer Scope 1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml 1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails 1891740 - OperatorStatusChanged is noisy 1891758 - the authentication operator may spam DeploymentUpdated event endlessly 1891759 - Dockerfile builds cannot change /etc/pki/ca-trust 1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1 1891825 - Error message not very informative in case of mode mismatch 1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 1891951 - UI should show warning while creating pools with compression on 1891952 - [Release 4.7] Apps Domain Enhancement 1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace 1891995 - OperatorHub displaying old content 1891999 - Storage efficiency card showing wrong compression ratio 1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.28' not found (required by ./opm) 1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. 1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator' 1892288 - assisted install workflow creates excessive control-plane disruption 1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config 1892358 - [e2e][automation] update feature gate for kubevirt-gating job 1892376 - Deleted netnamespace could not be re-created 1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky 1892393 - TestListPackages is flaky 1892448 - MCDPivotError alert/metric missing 1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash 1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env 1892653 - User is unable to create KafkaSource with v1beta 1892724 - VFS added to the list of devices of the nodeptpdevice CRD 1892799 - Mounting additionalTrustBundle in the operator 1893117 - Maintenance mode on vSphere blocks installation. 1893351 - TLS secrets are not able to edit on console. 1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots 1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability 1893546 - Deploy using virtual media fails on node cleaning step 1893601 - overview filesystem utilization of OCP is showing the wrong values 1893645 - oc describe route SIGSEGV 1893648 - Ironic image building process is not compatible with UEFI secure boot 1893724 - OperatorHub generates incorrect RBAC 1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted 1893776 - No useful metrics for image pull time available, making debugging issues there impossible 1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator 1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD 1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS 1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped 1893944 - Wrong product name for Multicloud Object Gateway 1893953 - (release-4.7) Gather default StatefulSet configs 1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating" 1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser 1893972 - Should skip e2e test cases as early as possible 1894013 - [v2v][Testday] VMware to 
CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://' 1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective 1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set 1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used. 1894065 - tag new packages to enable TLS support 1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0 1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries 1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM 1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted 1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI) 1894216 - Improve OpenShift Web Console availability 1894275 - Fix CRO owners file to reflect node owner 1894278 - "database is locked" error when adding bundle to index image 1894330 - upgrade channels needs to be updated for 4.7 1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... 
no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient" 1894374 - Dont prevent the user from uploading a file with incorrect extension 1894432 - [oVirt] sometimes installer timeout on tmp_import_vm 1894477 - bash syntax error in nodeip-configuration.service 1894503 - add automated test for Polarion CNV-5045 1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform 1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets 1894645 - Cinder volume provisioning crashes on nil cloud provider 1894677 - image-pruner job is panicking: klog stack 1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0 1894860 - 'backend' CI job passing despite failing tests 1894910 - Update the node to use the real-time kernel fails 1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package 1895065 - Schema / Samples / Snippets Tabs are all selected at the same time 1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI 1895141 - panic in service-ca injector 1895147 - Remove memory limits on openshift-dns 1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation 1895268 - The bundleAPIs should NOT be empty 1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster 1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release" 1895360 - Machine Config Daemon removes a file although its defined in the dropin 1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. 
OCP 4.6.1 1895372 - Web console going blank after selecting any operator to install from OperatorHub 1895385 - Revert KUBELET_LOG_LEVEL back to level 3 1895423 - unable to edit an application with a custom builder image 1895430 - unable to edit custom template application 1895509 - Backup taken on one master cannot be restored on other masters 1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image 1895838 - oc explain description contains '/' 1895908 - "virtio" option is not available when modifying a CD-ROM to disk type 1895909 - e2e-metal-ipi-ovn-dualstack is failing 1895919 - NTO fails to load kernel modules 1895959 - configuring webhook token authentication should prevent cluster upgrades 1895979 - Unable to get coreos-installer with --copy-network to work 1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV 1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded) 1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed 1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest 1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded 1896244 - Found a panic in storage e2e test 1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general 1896302 - [e2e][automation] Fix 4.6 test failures 1896365 - [Migration]The SDN migration cannot revert under some conditions 1896384 - [ovirt IPI]: local coredns resolution not working 1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6 1896529 - Incorrect instructions in the Serverless operator and application quick starts 1896645 - documentationBaseURL needs to be updated for 4.7 1896697 - [Descheduler] policy.yaml param in cluster configmap is empty 1896704 - Machine API components should honour cluster wide proxy settings 1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters 1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator 1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails 1896918 - start creating new-style Secrets for AWS 1896923 - DNS pod /metrics exposed on anonymous http port 1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1897003 - VNC console cannot be connected after visit it in new window 1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals 1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO 1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored 1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. 
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces 1897138 - oVirt provider uses deprecated cluster-api project 1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly 1897252 - Firing alerts are not showing up in console UI after cluster is up for some time 1897354 - Operator installation showing success, but Provided APIs are missing 1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused" 1897412 - [sriov]disableDrain was not updated in CRD of manifest 1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page 1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost' 1897520 - After restarting nodes the image-registry co is in degraded true state. 1897584 - Add casc plugins 1897603 - Cinder volume attachment detection failure in Kubelet 1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized" 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests 1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition 1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service 1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing 1897897 - ptp lose sync openshift 4.6 1898036 - no network after reboot (IPI) 1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically 1898097 - mDNS floods the baremetal network 1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem 1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster 1898174 - [OVN] EgressIP does not guard against node IP assignment 1898194 - GCP: can't install on custom machine types 1898238 - Installer validations allow same floating IP for API and Ingress 1898268 - [OVN]: 'make check' broken on 4.6 1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default 1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover 1898357 - Within the operatorhub details view, long unbroken text strings do not wrap, causing broken display. 1898407 - [Deployment timing regression] Deployment takes longer with 4.7 1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service 1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine 1898500 - Failure to upgrade operator when a Service is included in a Bundle 1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic 1898532 - Display names defined in specDescriptors not respected 1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted 1898613 - Whereabouts should exclude IPv6 ranges 1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase 1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator 1898839 - Wrong YAML in operator metadata 1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job 1898873 - Remove TechPreview Badge from Monitoring 1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way 1899111 -
[RFE] Update jenkins-maven-agent to maven36 1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist 1899175 - bump the RHCOS boot images for 4.7 1899198 - Use new packages for ipa ramdisks 1899200 - In Installed Operators page I cannot search for an Operator by its name 1899220 - Support AWS IMDSv2 1899350 - configure-ovs.sh doesn't configure bonding options 1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found" 1899459 - Failed to start monitoring pods once the operator removed from override list of CVO 1899515 - Passthrough credentials are not immediately re-distributed on update 1899575 - update discovery burst to reflect lots of CRDs on openshift clusters 1899582 - update discovery burst to reflect lots of CRDs on openshift clusters 1899588 - Operator objects are re-created after all other associated resources have been deleted 1899600 - Increased etcd fsync latency as of OCP 4.6 1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup 1899627 - Project dashboard Active status using small icon 1899725 - Pods table does not wrap well with quick start sidebar open 1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD) 1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality 1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0" 1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config configmap 1899853 - additionalSecurityGroupIDs not working for master nodes 1899922 - NP changes sometimes influence new pods.
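Entry 1899835 above reports catalog-operator crash-looping on "runtime error: index out of range [0] with length 0", the classic Go failure of indexing an empty slice. An illustrative guard for this pattern (the function name and fallback semantics are invented for the example, not taken from the operator's code):

```go
package main

import "fmt"

// firstOr returns the first element of s, or fallback when s is
// empty -- the len check that prevents "index out of range [0]
// with length 0".
func firstOr(s []string, fallback string) string {
	if len(s) == 0 {
		return fallback // indexing s[0] here would panic
	}
	return s[0]
}

func main() {
	fmt.Println(firstOr(nil, "default"))                 // empty input uses the fallback
	fmt.Println(firstOr([]string{"stable", "beta"}, "")) // normal input returns s[0]
}
```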
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet 1900008 - Fix internationalized sentence fragments in ImageSearch.tsx 1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx 1900020 - Remove &apos; from internationalized keys 1900022 - Search Page - Top labels field is not applied to selected Pipeline resources 1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently 1900126 - Creating a VM results in suggestion to create a default storage class when one already exists 1900138 - [OCP on RHV] Remove insecure mode from the installer 1900196 - stalld is not restarted after crash 1900239 - Skip "subPath should be able to unmount" NFS test 1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists 1900377 - [e2e][automation] create new css selector for active users 1900496 - (release-4.7) Collect spec config for clusteroperator resources 1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks 1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue 1900759 - include qemu-guest-agent by default 1900790 - Track all resource counts via telemetry 1900835 - Multus errors when cachefile is not found 1900935 - 'oc adm release mirror' panic: runtime error 1900989 - accessing the route cannot wake up the idled resources 1901040 - When scaling down the status of the node is stuck on deleting 1901057 - authentication operator health check failed when installing a cluster behind proxy 1901107 - pod donut shows incorrect information 1901111 - Installer dependencies are broken 1901200 - linuxptp-daemon crash when enable debug log level 1901301 - CBO should handle platform=BM without provisioning CR 1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly 1901363 - High Podready Latency due to timed out waiting for
annotations 1901373 - redundant bracket on snapshot restore button 1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true" 1901395 - "Edit virtual machine template" action link should be removed 1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting 1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP 1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema 1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance" 1901604 - CNO blocks editing Kuryr options 1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled 1901909 - The device plugin pods / cni pod are restarted every 5 minutes 1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service 1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error 1902059 - Wire a real signer for service account issuer 1902091 - 'cluster-image-registry-operator' pod leaves connections open when fails connecting S3 storage 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod 1902253 - MHC status doesn't set RemediationsAllowed = 0 1902299 - Failed to mirror operator catalog - error: destination registry required 1902545 - Cinder csi driver node pod should add nodeSelector for Linux 1902546 - Cinder csi driver node
pod doesn't run on master node 1902547 - Cinder csi driver controller pod doesn't run on master node 1902552 - Cinder csi driver does not use the downstream images 1902595 - Project workloads list view doesn't show alert icon and hover message 1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent 1902601 - Cinder csi driver pods run as BestEffort qosClass 1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group 1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails 1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked 1902824 - failed to generate semver informed package manifest: unable to determine default channel 1902894 - hybrid-overlay-node crashing trying to get node object during initialization 1902969 - Cannot load vmi detail page 1902981 - It should default to current namespace when create vm from template 1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI 1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry 1903034 - OLM continuously printing debug logs 1903062 - [Cinder csi driver] Deployment mounted volume have no write access 1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready 1903107 - Enable vsphere-problem-detector e2e tests 1903164 - OpenShift YAML editor jumps to top every few seconds 1903165 - Improve Canary Status Condition handling for e2e tests 1903172 - Column Management: Fix sticky footer on scroll 1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled 1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format: 1903192 - Role name missing on create role binding form 
1903196 - Popover positioning is misaligned for Overview Dashboard status items 1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. 1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components 1903248 - Backport Upstream Static Pod UID patch 1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests] 1903290 - Kubelet repeatedly log the same log line from exited containers 1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. 1903382 - Panic when task-graph is canceled with a TaskNode with no tasks 1903400 - Migrate a VM which is not running goes to pending state 1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page 1903414 - NodePort is not working when configuring an egress IP address 1903424 - mapi_machine_phase_transition_seconds_sum doesn't work 1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum" 1903639 - Hostsubnet gatherer produces wrong output 1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service 1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started 1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image 1903717 - Handle different Pod selectors for metal3 Deployment 1903733 - Scale up followed by scale down can delete all running workers 1903917 - Failed to load "Developer Catalog" page 1903999 - Httplog response code is always zero 1904026 - The quota controllers should resync on new resources and make progress 1904064 - Automated cleaning is disabled by default 1904124 - DHCP to static lease script doesn't work correctly if starting with infinite 
leases 1904125 - Bootstrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap 1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails 1904133 - KubeletConfig flooded with failure conditions 1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart 1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi ! 1904244 - MissingKey errors for two plugins using i18next.t 1904262 - clusterresourceoverride-operator has version: 1.0.0 every build 1904296 - VPA-operator has version: 1.0.0 every build 1904297 - The index image generated by "opm index prune" leaves unrelated images 1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards 1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade 1904497 - vsphere-problem-detector: Run on vSphere cloud only 1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set 1904502 - vsphere-problem-detector: allow longer timeouts for some operations 1904503 - vsphere-problem-detector: emit alerts 1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody) 1904578 - metric scraping for vsphere problem detector is not configured 1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade 1904663 - IPI pointer customization MachineConfig always generated 1904679 - [Feature:ImageInfo] Image info should display information about images 1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image 1904684 - [sig-cli] oc debug ensure it works with image streams 1904713 - Helm charts with kubeVersion restriction are filtered incorrectly 1904776 - Snapshot modal alert is not pluralized 1904824 - Set vSphere hostname from guestinfo before NM starts 1904941 - Insights
status is always showing a loading icon 1904973 - KeyError: 'nodeName' on NP deletion 1904985 - Prometheus and thanos sidecar targets are down 1904993 - Many ampersand special characters are found in strings 1905066 - QE - Monitoring test cases - smoke test suite automation 1905074 - QE - Gherkin linter to maintain standards 1905100 - Too many haproxy processes in default-router pod causing high load average 1905104 - Snapshot modal disk items missing keys 1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm 1905119 - Race in AWS EBS determining whether custom CA bundle is used 1905128 - [e2e][automation] e2e tests succeed without actually execute 1905133 - operator conditions special-resource-operator 1905141 - vsphere-problem-detector: report metrics through telemetry 1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures 1905194 - Detecting broken connections to the Kube API takes up to 15 minutes 1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests 1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP 1905253 - Inaccurate text at bottom of Events page 1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905299 - OLM fails to update operator 1905307 - Provisioning CR is missing from must-gather 1905319 - cluster-samples-operator containers are not requesting required memory resource 1905320 - csi-snapshot-webhook is not requesting required memory resource 1905323 - dns-operator is not requesting required memory resource 1905324 - ingress-operator is not requesting required memory resource 1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory 1905328 - Changing the bound token service account issuer invalidates previously issued bound tokens 1905329 -
openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory 1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails 1905347 - QE - Design Gherkin Scenarios 1905348 - QE - Design Gherkin Scenarios 1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod 1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted 1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input 1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation 1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1 1905404 - The example of "Remove the entrypoint on the mysql:latest image" for oc image append does not work 1905416 - Hyperlink not working from Operator Description 1905430 - usbguard extension fails to install because of missing correct protobuf dependency version 1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads 1905502 - Test flake - unable to get https transport for ephemeral-registry 1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs 1905610 - Fix typo in export script 1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster 1905640 - Subscription manual approval test is flaky 1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry 1905696 - ClusterMoreUpdatesModal component did not get internationalized 1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes 1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project 1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster 1905792 - [OVN] Cannot create egressfirewall with dnsName 1905889 - Should create SA for each namespace that the operator scoped 1905920 - Quickstart exit and restart 1905941 - Page goes to error after create catalogsource 1905977 - QE Gherkin design scenario - pipeline metrics ODC-3711 1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters 1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected 1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it 1906118 - OCS feature detection constantly polls storageclusters and storageclasses 1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource 1906121 - [oc] After new-project creation, the kubeconfig file does not set the project 1906134 - OLM should not create OperatorConditions for copied CSVs 1906143 - CBO supports log levels 1906186 - i18n: Translators are not able to translate this without context for alert manager config 1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots 1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion
to support volume resize. 1906276 - oc image append can't work with multi-arch image with --filter-by-os='.*' 1906318 - use proper term for Authorized SSH Keys 1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional 1906356 - Unify Clone PVC boot source flow with URL/Container boot source 1906397 - IPA has incorrect kernel command line arguments 1906441 - HorizontalNav and NavBar have invalid keys 1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log 1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project 1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them 1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures 1906511 - Root reprovisioning tests flaking often in CI 1906517 - Validation is not robust enough and may prevent generating install-config. 1906518 - Update snapshot API CRDs to v1 1906519 - Update LSO CRDs to use v1 1906570 - Number of disruptions caused by reboots on a cluster cannot be measured 1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope 1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs 1906655 - [SDN] Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs 1906679 - quick start panel styles are not loaded 1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber 1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form 1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created 1906689 - user can pin to nav configmaps and secrets multiple times 1906691 - Add doc which describes disabling helm chart repository 1906713 - Quick
starts not accessible for a developer user 1906718 - helm chart "provided by Redhat" is misspelled 1906732 - Machine API proxy support should be tested 1906745 - Update Helm endpoints to use Helm 3.4.x 1906760 - performance issues with topology constantly re-rendering 1906766 - localized Autoscaled & Autoscaling pod texts overlap with the pod ring 1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section 1906769 - topology fails to load with non-kubeadmin user 1906770 - shortcuts on mobile view occupy a lot of space 1906798 - Dev catalog customization doesn't update console-config ConfigMap 1906806 - Allow installing extra packages in ironic container images 1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer 1906835 - Topology view shows add page before then showing full project workloads 1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version 1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy 1906860 - Bump kube dependencies to v1.20 for Net Edge components 1906864 - Quick Starts Tour: Need to adjust vertical spacing 1906866 - Translations of Sample-Utils 1906871 - White screen when sort by name in monitoring alerts page 1906872 - Pipeline Tech Preview Badge Alignment 1906875 - Provide an option to force backup even when API is not available.
1906877 - 'Placeholder' value in search filter does not match column heading in Vulnerabilities 1906879 - Add missing i18n keys 1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install 1906896 - No Alerts causes odd empty Table (Need no content message) 1906898 - Missing User RoleBindings in the Project Access Web UI 1906899 - Quick Start - Highlight Bounding Box Issue 1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1 1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers 1906935 - Delete resources when Provisioning CR is deleted 1906968 - Must-gather should support collecting kubernetes-nmstate resources 1906986 - Ensure failed pod adds are retried even if the pod object doesn't change 1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt 1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change 1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible. 1907269 - Tooltips data are different when checking stack or not checking stack for the same time 1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen 1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance 1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent 1907293 - Increase timeouts in e2e tests 1907295 - Gherkin script for improve management for helm 1907299 - Advanced Subscription Badge for KMS and Arbiter not present 1907303 - Align VM template list items by baseline 1907304 - Use PF styles for selected template card in VM Wizard 1907305 - Drop 'ISO' from CDROM boot source message 1907307 - Support and provider labels should be passed on between templates and sources 1907310 - Pin action should be renamed to favorite 1907312 - VM Template source popover is missing info about added date 1907313 - ClusterOperator objects cannot be overridden with cvo-overrides 1907328 - iproute-tc package is missing in ovn-kube image 1907329 - CLUSTER_PROFILE env. variable is not used by the CVO 1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached" 1907373 - Rebase to kube 1.20.0 1907375 - Bump to latest available 1.20.x k8s - workloads team 1907378 - Gather netnamespaces networking info 1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity 1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one 1907390 - prometheus-adapter: panic after k8s 1.20 bump 1907399 - build log icon link on topology nodes cause app to reload 1907407 - Buildah version not accessible 1907421 - [4.6.1] oc-image-mirror command failed on "error: unable to copy layer" 1907453 - Dev Perspective -> running vm details -> resources -> no data 1907454 - Install PodConnectivityCheck CRD with CNO 1907459 - "The Boot source is also maintained by Red Hat."
is always shown for all boot sources 1907475 - Unable to estimate the error rate of ingress across the connected fleet 1907480 - Active alerts section throwing forbidden error for users. 1907518 - Kamelets/Eventsource should be shown to user if they have create access 1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US 1907610 - Update kubernetes deps to 1.20 1907612 - Update kubernetes deps to 1.20 1907621 - openshift/installer: bump cluster-api-provider-kubevirt version 1907628 - Installer does not set primary subnet consistently 1907632 - Operator Registry should update its kubernetes dependencies to 1.20 1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters 1907644 - fix up handling of non-critical annotations on daemonsets/deployments 1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?) 1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication 1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail 1907767 - [e2e][automation] update test suite for kubevirt plugin 1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot 1907792 - The overrides of the OperatorCondition cannot block the operator upgrade 1907793 - Surface support info in VM template details 1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage 1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set 1907863 - Quickstarts status not updating when starting the tour 1907872 - dual stack with an ipv6 network fails on bootstrap phase 1907874 - QE - Design Gherkin Scenarios for epic ODC-5057 1907875 - No response when try to expand pvc with an invalid size 1907876 - Refactoring record package to make gatherer configurable 1907877 - QE - Automation- pipelines builder scripts 1907883
- Fix Pipeline creation without namespace issue 1907888 - Fix pipeline list page loader 1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form 1907892 - Unable to edit application deployed using "From Devfile" option 1907893 - navSortUtils.spec.ts unit test failure 1907896 - When a workload is added, Topology does not place the new items well 1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template 1907924 - Enable madvdontneed in OpenShift Images 1907929 - Enable madvdontneed in OpenShift System Components Part 2 1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot 1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context 1907948 - OCM-O bump to k8s 1.20 1907952 - bump to k8s 1.20 1907972 - Update OCM link to open Insights tab 1907989 - DataVolumes were introduced in common templates - VM creation fails in the UI 1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916 1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni 1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk 1908035 - dynamic-demo-plugin build does not generate dist directory 1908135 - quick search modal is not centered over topology 1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled 1908159 - [AWS C2S] MCO fails to sync cloud config 1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384 custom type (n1-custom-4-16384) 1908180 - Add source for template is stuck in preparing pvc 1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens 1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN 1908277 - QE - Automation- pipelines actions scripts 1908280 - Documentation
describing ignore-volume-az is incorrect 1908296 - Fix pipeline builder form yaml switcher validation issue 1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI 1908323 - Create button missing for PLR in the search page 1908342 - The new pv_collector_total_pv_count is not reported via telemetry 1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name 1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots 1908349 - Volume snapshot tests are failing after 1.20 rebase 1908353 - QE - Automation- pipelines runs scripts 1908361 - bump to k8s 1.20 1908367 - QE - Automation- pipelines triggers scripts 1908370 - QE - Automation- pipelines secrets scripts 1908375 - QE - Automation- pipelines workspaces scripts 1908381 - Go Dependency Fixes for Devfile Lib 1908389 - Loadbalancer Sync failing on Azure 1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived 1908407 - Backport Upstream 95269 to fix potential crash in kubelet 1908410 - Exclude Yarn from VSCode search 1908425 - Create Role Binding form subject type and name are undefined when All Project is selected 1908431 - When the marketplace-operator pod gets restarted, the custom catalogsources are gone, as well as the pods 1908434 - Remove &apos from metal3-plugin internationalized strings 1908437 - Operator backed with no icon has no badge associated with the CSV tag 1908459 - bump to k8s 1.20 1908461 - Add bugzilla component to OWNERS file 1908462 - RHCOS 4.6 ostree removed dhclient 1908466 - CAPO AZ Screening/Validating 1908467 - Zoom in and zoom out in topology package should be sentence case 1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size 1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster 1908471 - OLM should bump k8s dependencies to 1.20 1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all
manifests 1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1908545 - VM clone dialog does not open 1908557 - [e2e][automation] Miss css id on bootsource and reviewcreate step on wizard 1908562 - Pod readiness is not being observed in real world cases 1908565 - [4.6] Cannot filter the platform/arch of the index image 1908573 - Align the style of flavor 1908583 - bootstrap does not run on additional networks if configured for master in install-config 1908596 - Race condition on operator installation 1908598 - Persistent Dashboard shows events for all provisioners 1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state 1908648 - Skip TestKernelType test on OKD, adjust TestExtensions 1908650 - The title of customize wizard is inconsistent 1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator 1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s] 1908687 - Option to save user settings separate when using local bridge (affects console developers only) 1908697 - Show kubectl diff command in the oc diff help page 1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom 1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds 1908717 - "missing unit character in duration" error in some network dashboards 1908746 - [Safari] Drop Shadow doesn't work as expected on hover on workload 1908747 - stale S3 CredentialsRequest in CCO manifest 1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase 1908830 - RHCOS 4.6 - Missing Initiatorname 1908868 - Update empty state message for EventSources and Channels tab 1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite
tolerations from tainted nodes 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1908888 - Dualstack does not work with multiple gateways 1908889 - Bump CNO to k8s 1.20 1908891 - TestDNSForwarding DNS operator e2e test is failing frequently 1908914 - CNO: upgrade nodes before masters 1908918 - Pipeline builder yaml view sidebar is not responsive 1908960 - QE - Design Gherkin Scenarios 1908971 - Gherkin Script for pipeline debt 4.7 1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated 1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console 1908998 - [cinder-csi-driver] doesn't detect the credentials change 1909004 - "No datapoints found" for RHEL node's filesystem graph 1909005 - i18n: workloads list view heading is not translated 1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects 1909027 - Disks option of Selected capacity chart shows HDD disk even on selection of SDD disk type 1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware 1909067 - Web terminal should keep latest output when connection closes 1909070 - PLR and TR Logs component is not streaming as fast as tkn 1909092 - Error Message should not confuse user on Channel form 1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page 1909108 - Machine API components should use 1.20 dependencies 1909116 - Catalog Sort Items dropdown is not aligned on Firefox 1909198 - Move Sink action option is not working 1909207 - Accessibility Issue on monitoring page 1909236 - Remove pinned icon overlap on resource name 1909249 - Intermittent packet drop from pod to pod 1909276 - Accessibility Issue on create project modal 1909289 - oc debug of an init container no longer
works 1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2 1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle 1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it 1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O 1909464 - Build operator-registry with golang-1.15 1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found 1909521 - Add kubevirt cluster type for e2e-test workflow 1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created 1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node 1909610 - Fix available capacity when no storage class selected 1909678 - scale up / down buttons available on pod details side panel 1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined 1909739 - Arbiter request data changes 1909744 - cluster-api-provider-openstack: Bump gophercloud 1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline 1909791 - Update standalone kube-proxy config for EndpointSlice 1909792 - Empty states for some details page subcomponents are not i18ned 1909815 - Perspective switcher is only half-i18ned 1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body 1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI 1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing 1909911 - [OVN]EgressFirewall caused a segfault 1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument 1909958 - Support Quick Start Highlights Properly 1909978 - 
ignore-volume-az = yes not working on standard storageClass 1909981 - Improve statement in template select step 1909992 - Fail to pull the bundle image when using the private index image 1910024 - Reload issue in latest (4.7) UI code on 4.6 cluster locally in dev 1910036 - QE - Design Gherkin Scenarios ODC-4504 1910049 - UPI: ansible-galaxy is not supported 1910127 - [UPI on oVirt]: Improve UPI Documentation 1910140 - fix the api dashboard with changes in upstream kube 1.20 1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable 1910165 - DHCP to static lease script doesn't handle multiple addresses 1910305 - [Descheduler] - The minKubeVersion should be 1.20.0 1910409 - Notification drawer is not localized for i18n 1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials 1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation 1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page 1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work 1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready 1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability 1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded 1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected" 1910753 - Support Directory Path to Devfile 1910805 - Missing translation for Pipeline status and breadcrumb text 1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer 1910840 - Show Nonexistent command info in the oc rollback -h help page 1910859 - breadcrumbs doesn't use last namespace 1910866 - Unify templates string 1910870 - Unify template
dropdown action 1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6 1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads" 1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard 1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration" 1911213 - Wrong and misleading warning for VMs that were created manually (not from template) 1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created 1911269 - waiting for the build message present when build exists 1911280 - Builder images are not detected for Dotnet, Httpd, NGINX 1911307 - Pod Scale-up requires extra privileges in OpenShift web-console 1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template 1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error 1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template 1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation 1911418 - [v2v] The target storage class name is not displayed if default storage class is used 1911434 - git ops empty state page displays icon with watermark 1911443 - SSH Certification field should be validated 1911465 - IOPS display wrong unit 1911474 - Devfile Application Group Does Not Delete Cleanly (errors) 1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController 1911574 - Expose volume mode on Upload Data form 1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined 1911632 - rpm-ostree command fails due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel 1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command
output said 'Failed to run bundle' 1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state 1911782 - Descheduler should not evict pod used local storage by the PVC 1911796 - uploading flow being displayed before submitting the form 1912066 - The ansible type operator's manager container is not stable when managing the CR 1912077 - helm operator's default rbac forbidden 1912115 - [automation] Analyze job keeps failing because of 'JavaScript heap out of memory' 1912237 - Rebase CSI sidecars for 4.7 1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page 1912409 - Fix flow schema deployment 1912434 - Update guided tour modal title 1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken 1912523 - Standalone pod status not updating in topology graph 1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion 1912558 - TaskRun list and detail screen doesn't show Pending status 1912563 - p&f: carry 97206: clean up executing request on panic 1912565 - OLM macOS local build broken by moby/term dependency 1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion 1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff 1912590 - publicImageRepository not being populated 1912640 - Go operator's controller pods is forbidden 1912701 - Handle dual-stack configuration for NIC IP 1912703 - multiple queries can't be plotted in the same graph under some conditions 1912730 - Operator backed: In-context should support visual connector if SBO is not installed 1912828 - Align High Performance VMs with High Performance in RHV-UI 1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates 1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator 1912907 - Helm chart repository index can contain unresolvable relative URL's 1912916 - Set external traffic policy to cluster for IBM platform 1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller 1912938 - Update confirmation modal for quick starts 1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment 1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment 1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver 1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912977 - rebase upstream static-provisioner 1913006 - Remove etcd v2 specific alerts with etcd_http* metrics 1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip 1913037 - update static-provisioner base image 1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state 1913085 - Regression OLM uses scoped client for CRD installation 1913096 - backport: cadvisor machine metrics are missing in k8s 1.19 1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually 1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root 1913196 - Guided Tour doesn't handle resizing of browser 1913209 - Support modal should be shown for community supported templates 1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort 1913249 - update info alert this template is 
not aditable 1913285 - VM list empty state should link to virtualization quick starts 1913289 - Rebase AWS EBS CSI driver for 4.7 1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled 1913297 - Remove restriction of taints for arbiter node 1913306 - unnecessary scroll bar is present on quick starts panel 1913325 - 1.20 rebase for openshift-apiserver 1913331 - Import from git: Fails to detect Java builder 1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used 1913343 - (release-4.7) Added changelog file for insights-operator 1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator 1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en." 1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads 1913420 - Time duration setting of resources is not being displayed 1913536 - 4.6.9 -> 4.7 upgrade hangs. 
RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\\n\" 1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase 1913560 - Normal user cannot load template on the new wizard 1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user 1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table 1913568 - Normal user cannot create template 1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker 1913585 - Topology descriptive text fixes 1913608 - Table data contains data value None after change time range in graph and change back 1913651 - Improved Red Hat image and crashlooping OpenShift pod collection 1913660 - Change location and text of Pipeline edit flow alert 1913685 - OS field not disabled when creating a VM from a template 1913716 - Include additional use of existing libraries 1913725 - Refactor Insights Operator Plugin states 1913736 - Regression: fails to deploy computes when using root volumes 1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes 1913751 - add third-party network plugin test suite to openshift-tests 1913783 - QE-To fix the merging pr issue, commenting the afterEach() block 1913807 - Template support badge should not be shown for community supported templates 1913821 - Need definitive steps about uninstalling descheduler operator 1913851 - Cluster Tasks are not sorted in pipeline builder 1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists 1913951 - Update the Devfile Sample Repo to an Official Repo Host 1913960 - Cluster Autoscaler should use 1.20 dependencies 1913969 - Field dependency descriptor can sometimes cause an exception 1914060 - Disk created from 'Import via Registry' cannot be used as boot disk 1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy 1914090 - Grafana - The resulting 
dataset is too large to graph (OCS RBD volumes being counted as disks) 1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances 1914125 - Still using /dev/vde as default device path when create localvolume 1914183 - Empty NAD page is missing link to quickstarts 1914196 - target port infrom dockerfileflow does nothing 1914204 - Creating VM from dev perspective may fail with template not found error 1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets 1914212 - [e2e][automation] Add test to validate bootable disk souce 1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes 1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows 1914287 - Bring back selfLink 1914301 - User VM Template source should show the same provider as template itself 1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs 1914309 - /terminal page when WTO not installed shows nonsensical error 1914334 - order of getting started samples is arbitrary 1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x 1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI 1914405 - Quick search modal should be opened when coming back from a selection 1914407 - Its not clear that node-ca is running as non-root 1914427 - Count of pods on the dashboard is incorrect 1914439 - Typo in SRIOV port create command example 1914451 - cluster-storage-operator pod running as root 1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true 1914642 - Customize Wizard Storage tab does not pass validation 1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling 1914793 - device names should not be translated 1914894 - 
Warn about using non-groupified api version 1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug 1914932 - Put correct resource name in relatedObjects 1914938 - PVC disk is not shown on customization wizard general tab 1914941 - VM Template rootdisk is not deleted after fetching default disk bus 1914975 - Collect logs from openshift-sdn namespace 1915003 - No estimate of average node readiness during lifetime of a cluster 1915027 - fix MCS blocking iptables rules 1915041 - s3:ListMultipartUploadParts is relied on implicitly 1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons 1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours 1915085 - Pods created and rapidly terminated get stuck 1915114 - [aws-c2s] worker machines are not create during install 1915133 - Missing default pinned nav items in dev perspective 1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource 1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot 1915188 - Remove HostSubnet anonymization 1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment 1915217 - OKD payloads expect to be signed with production keys 1915220 - Remove dropdown workaround for user settings 1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure 1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod 1915277 - [e2e][automation]fix cdi upload form test 1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout 1915304 - Updating scheduling component builder & base images to be consistent with ART 1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node 1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from 
different connection 1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod 1915357 - Dev Catalog doesn't load anything if virtualization operator is installed 1915379 - New template wizard should require provider and make support input a dropdown type 1915408 - Failure in operator-registry kind e2e test 1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation 1915460 - Cluster name size might affect installations 1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance 1915540 - Silent 4.7 RHCOS install failure on ppc64le 1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI) 1915582 - p&f: carry upstream pr 97860 1915594 - [e2e][automation] Improve test for disk validation 1915617 - Bump bootimage for various fixes 1915624 - "Please fill in the following field: Template provider" blocks customize wizard 1915627 - Translate Guided Tour text. 1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error 1915647 - Intermittent White screen when the connector dragged to revision 1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased 1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found" 1915661 - Can't run the 'oc adm prune' command in a pod 1915672 - Kuryr doesn't work with selfLink disabled. 
1915674 - Golden image PVC creation - storage size should be taken from the template 1915685 - Message for not supported template is not clear enough 1915760 - Need to increase timeout to wait rhel worker get ready 1915793 - quick starts panel syncs incorrectly across browser windows 1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster 1915818 - vsphere-problem-detector: use "_totals" in metrics 1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol 1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version 1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0 1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics 1915885 - Kuryr doesn't support workers running on multiple subnets 1915898 - TaskRun log output shows "undefined" in streaming 1915907 - test/cmd/builds.sh uses docker.io 1915912 - sig-storage-csi-snapshotter image not available 1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard 1915939 - Resizing the browser window removes Web Terminal Icon 1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] 1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7 1915962 - ROKS: manifest with machine health check fails to apply in 4.7 1915972 - Global configuration breadcrumbs do not work as expected 1915981 - Install ethtool and conntrack in container for debugging 1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception 1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups 1916021 - OLM enters infinite loop if Pending CSV replaces itself 1916056 - Need Visual Web 
Terminal metric enabled for OCP monitoring telemetry 1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations 1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk 1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration 1916145 - Explicitly set minimum versions of python libraries 1916164 - Update csi-driver-nfs builder & base images to be consistent with ART 1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7 1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third 1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2 1916379 - error metrics from vsphere-problem-detector should be gauge 1916382 - Can't create ext4 filesystems with Ignition 1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates 1916401 - Deleting an ingress controller with a bad DNS Record hangs 1916417 - [Kuryr] Must-gather does not have all Custom Resources information 1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image 1916454 - teach CCO about upgradeability from 4.6 to 4.7 1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation 1916502 - Boot disk mirroring fails with mdadm error 1916524 - Two rootdisk shows on storage step 1916580 - Default yaml is broken for VM and VM template 1916621 - oc adm node-logs examples are wrong 1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
1916692 - Possibly fails to destroy LB and thus cluster 1916711 - Update Kube dependencies in MCO to 1.20.0 1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6 1916764 - editing a workload with no application applied, will auto fill the app 1916834 - Pipeline Metrics - Text Updates 1916843 - collect logs from openshift-sdn-controller pod 1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed 1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually 1916888 - OCS wizard Donor chart does not get updated when Device Type is edited 1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together" 1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace 1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document 1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error 1917117 - Common templates - disks screen: invalid disk name 1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created 1917146 - [oVirt] Consume 23-10 ovirt sdk - csi operator 1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
1917148 - [oVirt] Consume 23-10 ovirt sdk 1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened 1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console 1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory 1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7 1917327 - annotations.message maybe wrong for NTOPodsNotReady alert 1917367 - Refactor periodic.go 1917371 - Add docs on how to use the built-in profiler 1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console 1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui 1917484 - [BM][IPI] Failed to scale down machineset 1917522 - Deprecate --filter-by-os in oc adm catalog mirror 1917537 - controllers continuously busy reconciling operator 1917551 - use min_over_time for vsphere prometheus alerts 1917585 - OLM Operator install page missing i18n 1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types 1917605 - Deleting an exgw causes pods to no longer route to other exgws 1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API 1917656 - Add to Project/application for eventSources from topology shows 404 1917658 - Show TP badge for sources powered by camel connectors in create flow 1917660 - Editing parallelism of job get error info 1917678 - Could not provision pv when no symlink and target found on rhel worker 1917679 - Hide double CTA in admin pipelineruns tab 1917683 - NodeTextFileCollectorScrapeError alert in OCP 4.6 cluster. 
1917759 - Console operator panics after setting plugin that does not exist to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather a list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin are not installed on OSP 1918153 - When & character is set as an environment variable in a build config it is getting converted as \u0026 1918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connectors are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration (SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period 
1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - 
OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese translate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information for oc new-project --skip-config-write is wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated. 
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is Disk Mode 1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to update oc login help page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off. 
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective 
and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 1921458 - [SDK] Gracefully handle the run bundle-upgrade if the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't come up after deploying with Vault details from UI 1921572 - For external source (i.e. GitHub Source) form view as well shows yaml 1921580 - [e2e][automation] Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation] 
Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 
1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources shows undefined in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprecated templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image should not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers declaration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 
https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8 4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4 mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk 4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6 McvuEaxco7U= =sw8i -----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918990 - ComplianceSuite scans use quay content image for initContainer 1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present 1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules 1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if 
the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

APPLE-SA-2020-1-29-2 iCloud for Windows 10.9.2

iCloud for Windows 10.9.2 is now available and addresses the following:

ImageIO
Available for: Windows 10 and later via the Microsoft Store
Impact: Processing a maliciously crafted image may lead to arbitrary code execution
Description: An out-of-bounds read was addressed with improved input validation.
CVE-2020-3826: Samuel Groß of Google Project Zero

libxml2
Available for: Windows 10 and later via the Microsoft Store
Impact: Processing maliciously crafted XML may lead to an unexpected application termination or arbitrary code execution
Description: A buffer overflow was addressed with improved size validation.
CVE-2020-3865: Ryan Pickren (ryanpickren.com)

Installation note:

iCloud for Windows 10.9.2 may be obtained from: https://support.apple.com/HT204283

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl4x9jMACgkQBz4uGe3y 0M1Apw/+PrQvBheHkxIo2XjPOyTxO+M8mlaU+6gY7Ue14zivPO20JqRLb34FyNfh iE+RSJ3NB/0cdZIUH1xcrKzK+tmVFVETJaBmLmoTHBy3946DQtUvditLfTHYnYzC peJbdG4UyevVwf/AoED5iI89lf/ADOWm9Xu0LVtvDKyTAFewQp9oOlG731twL9iI 6ojuzYokYzJSWcDlLMTFB4sDpZsNEz2Crf+WZ44r5bHKcSTi7HzS+OPueQ6dSdqi Y9ioDv/SB0dnLJZE2wq6eaFL2t7eXelYUSL7SekXI4aYQkhaOQFabutFuYNoOX4e +ctnbSdVT5WjG7tyg9L7bl4m1q8GgH43OLBmA1Z/gps004PHMQ87cRRjvGGKIQOf YMI0VBqFc6cAnDYh4Oun31gbg9Y1llYYwTQex7gjx9U+v3FKOaxWxQg8N9y4d2+v qsStr7HKVKcRE/LyEx4fA44VoKNHyHZ4LtQSeX998MTapyH5XbbHEWr/K4TcJ8ij 6Zv/GkUKeINDJbRFhiMJtGThTw5dba5sfHfVv88NrbNYcwwVQrvlkfMq8Jrn0YEf rahjCDLigXXbyaqxM57feJ9+y6jHpULeywomGv+QEzyALTdGKIaq7w1pwLdOHizi Lcxvr8FxmUxydrvFJSUDRa9ELigIsLmgPB3l1UiUmd3AQ38ymJw= =tRpr -----END PGP SIGNATURE-----= . ------------------------------------------------------------------------ WebKitGTK and WPE WebKit Security Advisory WSA-2020-0002


Date reported           : February 14, 2020
Advisory ID             : WSA-2020-0002
WebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2020-0002.html
WPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2020-0002.html
CVE identifiers         : CVE-2020-3862, CVE-2020-3864, CVE-2020-3865,
                          CVE-2020-3867, CVE-2020-3868.

Several vulnerabilities were discovered in WebKitGTK and WPE WebKit.

CVE-2020-3862 Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4. Credit to Srikanth Gatta of Google Chrome.

CVE-2020-3864 Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4. Credit to Ryan Pickren (ryanpickren.com).

CVE-2020-3865 Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4. Credit to Ryan Pickren (ryanpickren.com).

CVE-2020-3867 Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4. Credit to an anonymous researcher.

CVE-2020-3868 Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4. Credit to Marcin Towalski of Cisco Talos.

We recommend updating to the latest stable versions of WebKitGTK and WPE WebKit. It is the best way to ensure that you are running safe versions of WebKit. Please check our websites for information about the latest stable releases.

Further information about WebKitGTK and WPE WebKit security advisories can be found at: https://webkitgtk.org/security.html or https://wpewebkit.org/security/.

The WebKitGTK and WPE WebKit team, February 14, 2020

. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202003-22


                                       https://security.gentoo.org/

Severity: Normal
Title: WebkitGTK+: Multiple vulnerabilities
Date: March 15, 2020
Bugs: #699156, #706374, #709612
ID: 202003-22


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which may lead to arbitrary code execution.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk        < 2.26.4                  >= 2.26.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"
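After upgrading, it can be useful to confirm that the installed WebKitGTK+ is at or beyond the fixed release. The sketch below is not part of the advisory; it assumes a GNU userland (`sort -V`) and that pkg-config can find the `webkit2gtk-4.0` module (the usual pkg-config name for this library), falling back to "0" when it cannot.

```shell
# Minimal post-upgrade check (assumptions: GNU sort, pkg-config with the
# webkit2gtk-4.0 module; both are illustrative, not mandated by the GLSA).

version_ge() {  # version_ge A B -> success when version A >= version B
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Fall back to "0" (treated as vulnerable) if pkg-config or the module
# is missing on this system.
ver="$(pkg-config --modversion webkit2gtk-4.0 2>/dev/null || echo 0)"

if version_ge "$ver" 2.26.4; then
  echo "webkit-gtk $ver: patched (>= 2.26.4)"
else
  echo "webkit-gtk $ver: update required"
fi
```

The `sort -V` trick orders the two versions numerically; if the minimum of the pair is the fixed release, the installed version is at least as new.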

References

[ 1 ] CVE-2019-8625 https://nvd.nist.gov/vuln/detail/CVE-2019-8625 [ 2 ] CVE-2019-8674 https://nvd.nist.gov/vuln/detail/CVE-2019-8674 [ 3 ] CVE-2019-8707 https://nvd.nist.gov/vuln/detail/CVE-2019-8707 [ 4 ] CVE-2019-8710 https://nvd.nist.gov/vuln/detail/CVE-2019-8710 [ 5 ] CVE-2019-8719 https://nvd.nist.gov/vuln/detail/CVE-2019-8719 [ 6 ] CVE-2019-8720 https://nvd.nist.gov/vuln/detail/CVE-2019-8720 [ 7 ] CVE-2019-8726 https://nvd.nist.gov/vuln/detail/CVE-2019-8726 [ 8 ] CVE-2019-8733 https://nvd.nist.gov/vuln/detail/CVE-2019-8733 [ 9 ] CVE-2019-8735 https://nvd.nist.gov/vuln/detail/CVE-2019-8735 [ 10 ] CVE-2019-8743 https://nvd.nist.gov/vuln/detail/CVE-2019-8743 [ 11 ] CVE-2019-8763 https://nvd.nist.gov/vuln/detail/CVE-2019-8763 [ 12 ] CVE-2019-8764 https://nvd.nist.gov/vuln/detail/CVE-2019-8764 [ 13 ] CVE-2019-8765 https://nvd.nist.gov/vuln/detail/CVE-2019-8765 [ 14 ] CVE-2019-8766 https://nvd.nist.gov/vuln/detail/CVE-2019-8766 [ 15 ] CVE-2019-8768 https://nvd.nist.gov/vuln/detail/CVE-2019-8768 [ 16 ] CVE-2019-8769 https://nvd.nist.gov/vuln/detail/CVE-2019-8769 [ 17 ] CVE-2019-8771 https://nvd.nist.gov/vuln/detail/CVE-2019-8771 [ 18 ] CVE-2019-8782 https://nvd.nist.gov/vuln/detail/CVE-2019-8782 [ 19 ] CVE-2019-8783 https://nvd.nist.gov/vuln/detail/CVE-2019-8783 [ 20 ] CVE-2019-8808 https://nvd.nist.gov/vuln/detail/CVE-2019-8808 [ 21 ] CVE-2019-8811 https://nvd.nist.gov/vuln/detail/CVE-2019-8811 [ 22 ] CVE-2019-8812 https://nvd.nist.gov/vuln/detail/CVE-2019-8812 [ 23 ] CVE-2019-8813 https://nvd.nist.gov/vuln/detail/CVE-2019-8813 [ 24 ] CVE-2019-8814 https://nvd.nist.gov/vuln/detail/CVE-2019-8814 [ 25 ] CVE-2019-8815 https://nvd.nist.gov/vuln/detail/CVE-2019-8815 [ 26 ] CVE-2019-8816 https://nvd.nist.gov/vuln/detail/CVE-2019-8816 [ 27 ] CVE-2019-8819 https://nvd.nist.gov/vuln/detail/CVE-2019-8819 [ 28 ] CVE-2019-8820 https://nvd.nist.gov/vuln/detail/CVE-2019-8820 [ 29 ] CVE-2019-8821 https://nvd.nist.gov/vuln/detail/CVE-2019-8821 [ 30 ] CVE-2019-8822 
https://nvd.nist.gov/vuln/detail/CVE-2019-8822 [ 31 ] CVE-2019-8823 https://nvd.nist.gov/vuln/detail/CVE-2019-8823 [ 32 ] CVE-2019-8835 https://nvd.nist.gov/vuln/detail/CVE-2019-8835 [ 33 ] CVE-2019-8844 https://nvd.nist.gov/vuln/detail/CVE-2019-8844 [ 34 ] CVE-2019-8846 https://nvd.nist.gov/vuln/detail/CVE-2019-8846 [ 35 ] CVE-2020-3862 https://nvd.nist.gov/vuln/detail/CVE-2020-3862 [ 36 ] CVE-2020-3864 https://nvd.nist.gov/vuln/detail/CVE-2020-3864 [ 37 ] CVE-2020-3865 https://nvd.nist.gov/vuln/detail/CVE-2020-3865 [ 38 ] CVE-2020-3867 https://nvd.nist.gov/vuln/detail/CVE-2020-3867 [ 39 ] CVE-2020-3868 https://nvd.nist.gov/vuln/detail/CVE-2020-3868

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202003-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202002-1182",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.5"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.4"
      },
      {
        "model": "webkitgtk",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "webkitgtk",
        "version": "2.26.4"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.17"
      },
      {
        "model": "leap",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "opensuse",
        "version": "15.1"
      },
      {
        "model": "icloud",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.8"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (ipod touch \u7b2c 7 \u4e16\u4ee3)"
      },
      {
        "model": "leap",
        "scope": null,
        "trust": 0.8,
        "vendor": "opensuse",
        "version": null
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (iphone 6s \u4ee5\u964d)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.5 \u672a\u6e80 (macos mojave)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.5 \u672a\u6e80 (macos high sierra)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (ipad air 2 \u4ee5\u964d)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 7.17 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (apple tv hd)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (apple tv 4k)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3.1 \u672a\u6e80 (ipad mini 4 \u4ee5\u964d)"
      },
      {
        "model": "itunes",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 12.10.4 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.5 \u672a\u6e80 (macos catalina)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 10.9.2 \u672a\u6e80 (windows 10 \u4ee5\u964d)"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/o:opensuse_project:leap",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple,Debian,WebKitGTK+ Team",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2020-3867",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-3867",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Medium",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 4.3,
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-002339",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "VHN-181992",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:N/I:P/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.1,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 2.8,
            "id": "CVE-2020-3867",
            "impactScore": 2.7,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.1,
            "baseSeverity": "Medium",
            "confidentialityImpact": "Low",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-002339",
            "impactScore": null,
            "integrityImpact": "Low",
            "privilegesRequired": "None",
            "scope": "Changed",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-3867",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-002339",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202001-1412",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-181992",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-3867",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A logic issue was addressed with improved state management. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to universal cross site scripting. Apple Safari, etc. are all products of Apple (Apple). Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple tvOS is a smart TV operating system. The product supports storage of music, photos, App and contacts, etc. WebKit is one of the web browser engine components. A security vulnerability exists in the WebKit component of several Apple products. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. 
(CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (the versions immediately prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. A remote attacker may be able to cause arbitrary code execution. (CVE-2020-3902). In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3-compatible API. \n\nThese updated images include numerous security fixes, bug fixes, and\nenhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume\n1813506 - Dockerfile not  compatible with docker and buildah\n1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup\n1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement\n1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance\n1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)\n1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. \n1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default\n1842254 - [NooBaa] Compression stats do not add up when compression id disabled\n1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster\n1849771 - [RFE] Account created by OBC should have same permissions as bucket owner\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot\n1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume\n1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount\n1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params)\n1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips \"b\" and \"c\" (spawned from Bug 1840084#c14)\n1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage\n1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards\n1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found\n1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining\n1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script\n1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH  while running couple of OCS test cases. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:5633-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:5633\nIssue date:        2021-02-24\nCVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 \n                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 \n                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 \n                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 \n                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 \n                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 \n                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 \n                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 \n                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 \n                   CVE-2019-6977 CVE-2019-6978 
CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 \n                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 \n                   CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 \n                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 \n                   CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 \n                   CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 \n                   CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 \n                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 \n                   CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 \n                   CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 \n                   CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 \n                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 \n                   CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 \n                   CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 \n                   CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 \n                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 \n                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 \n                   CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 \n                   CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 \n                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 \n                   CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 \n                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 \n                   CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 \n                   CVE-2020-3885 
CVE-2020-3894 CVE-2020-3895 \n                   CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 \n                   CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 \n                   CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 \n                   CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 \n                   CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 \n                   CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 \n                   CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 \n                   CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 \n                   CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 \n                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 \n                   CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 \n                   CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 \n                   CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 \n                   CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 \n                   CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 \n                   CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 \n                   CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 \n                   CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 \n                   CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 \n                   CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 \n                   CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 \n                   CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 \n                   CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 \n                   CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 \n                   CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 \n                   CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 \n                   CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 \n                   CVE-2021-2007 CVE-2021-3121 
\n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.7.0 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2020:5634\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-x86_64\n\nThe image digest is\nsha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-s390x\n\nThe image digest is\nsha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le\n\nThe image digest is\nsha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the 
appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. \n\nSecurity Fix(es):\n\n* crewjam/saml: authentication bypass in saml authentication\n(CVE-2020-27846)\n\n* golang: crypto/ssh: crafted authentication request can lead to nil\npointer dereference (CVE-2020-29652)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* kubernetes: Secret leaks in kube-controller-manager when using vSphere\nProvider (CVE-2020-8563)\n\n* containernetworking/plugins: IPv6 router advertisements allow for MitM\nattacks on IPv4 clusters (CVE-2020-10749)\n\n* heketi: gluster-block volume password details available in logs\n(CVE-2020-10763)\n\n* golang.org/x/text: possibility to trigger an infinite loop in\nencoding/unicode could lead to crash (CVE-2020-14040)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* golang-github-gorilla-websocket: integer overflow leads to denial of\nservice (CVE-2020-27813)\n\n* golang: math/big: panic during recursive division of very large numbers\n(CVE-2020-28362)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.7, see the following documentation,\nwhich\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1620608 - Restoring deployment config with history leads to weird state\n1752220 - [OVN] Network Policy fails to work when project label gets overwritten\n1756096 - Local storage operator should implement must-gather spec\n1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n1768255 - installer reports 100% complete but failing components\n1770017 - Init containers restart when the exited container is removed from node. \n1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating\n1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset\n1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale\n1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands\n1784298 - \"Displaying with reduced resolution due to large dataset.\" would show under some conditions\n1785399 - Under condition of heavy pod creation, creation fails with \u0027error reserving pod name ...: name is reserved\"\n1797766 - Resource Requirements\" specDescriptor fields - CPU and Memory injects empty string YAML editor\n1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - 
olmFull suite always fails once th suite is run on the same cluster\n1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz\n1837953 - Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks\n1838751 - [oVirt][Tracker] Re-enable skipped network tests\n1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups\n1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed\n1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP\n1841119 - Get rid of config patches and pass flags directly to kcm\n1841175 - When an Install Plan gets deleted, OLM does not create a new one\n1841381 - Issue with memoryMB validation\n1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option\n1844727 - Etcd container leaves grep and lsof zombie processes\n1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs\n1847074 - Filter bar layout issues at some screen widths on search page\n1848358 - CRDs with preserveUnknownFields:true don\u0027t reflect in status that they are non-structural\n1849543 - [4.5]kubeletconfig\u0027s description will show multiple lines for finalizers when upgrade from 4.4.8-\u003e4.5\n1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service\n1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard\n1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing\n1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD\n1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service\n1853115 - the restriction 
of --cloud option should be shown in help text. \n1853116 - `--to` option does not work with `--credentials-requests` flag. \n1853352 - [v2v][UI] Storage Class fields Should  Not be empty  in VM  disks view\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854567 - \"Installed Operators\" list showing \"duplicated\" entries during installation\n1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present\n1855351 - Inconsistent Installer reactions to Ctrl-C during user input process\n1855408 - OVN cluster unstable after running minimal scale test\n1856351 - Build page should show metrics for when the build ran, not the last 30 minutes\n1856354 - New APIServices missing from OpenAPI definitions\n1857446 - ARO/Azure: excessive pod memory allocation causes node lockup\n1857877 - Operator upgrades can delete existing CSV before completion\n1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed\n1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created\n1860136 - default ingress does not propagate annotations to route object on update\n1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as \"Failed\"\n1860518 - unable to stop a crio pod\n1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller\n1862430 - LSO: PV creation lock should not be acquired in a loop\n1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
\n1862608 - Virtual media does not work on hosts using BIOS, only UEFI\n1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network\n1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff\n1865839 - rpm-ostree fails with \"System transaction in progress\" when moving to kernel-rt\n1866043 - Configurable table column headers can be illegible\n1866087 - Examining agones helm chart resources results in \"Oh no!\"\n1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info\n1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement\n1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity\n1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there\u2019s no indication on which labels offer tooltip/help\n1866340 - [RHOCS Usability Study][Dashboard] It was not clear why \u201cNo persistent storage alerts\u201d was prominently displayed\n1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations\n1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le \u0026 s390x\n1866482 - Few errors are seen when oc adm must-gather is run\n1866605 - No metadata.generation set for build and buildconfig objects\n1866873 - MCDDrainError \"Drain failed on  , updates may be blocked\" missing rendered node name\n1866901 - Deployment strategy for BMO allows multiple pods to run at the same time\n1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCS 4.5] UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery, pods are stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content is expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when creating localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes go into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols, e.g. checkmark, in the overview page have no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: "?" button/icon in Developer Console -> Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig that has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\"/mount-point\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading from 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr, OVN) communication through this loadbalancer fails after 2-5 minutes
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov] VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6 + OCS 4.6 (multiple SC) Internal Mode - UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - `oc api-resources` does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE] Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE] Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in the thanos-querier pod's oauth-proxy container: "Authorization header does not start with 'Basic', skipping basic authentication"
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables))
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPolicy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails with error: the container name "assisted-installer" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication requires functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claims the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui] VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [BareMetal IPI] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation] Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval field should display Manual after the approval changed to Manual from Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approving one
1882667 - [ovn] br-ex Link not found when scaling up RHEL worker
1882723 - [vsphere] Suggested minimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Maintenance and Power On/Off status
1883422 - operator-sdk cleanup fails after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must-gather reports "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visually impaired user using screen reader not able to select Admin/Developer console options in drop down menu
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7] UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accessibility violations
1885706 - Cypress: Fix 'link-name' accessibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad rootDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options don't get selected style on click, i.e. grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v] Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with "no fc disk found" error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgrading from 4.5 to 4.6
1887046 - Event for LSO needs update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver (gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - `oc explain localvolumediscovery` returns empty description
1887751 - `oc explain localvolumediscoveryresult` returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approving the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take effect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some container images on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod errors when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs show error which reads "initial monitor sync has error"
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover messages are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accessibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the timeout for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols, e.g. checkmark, in the Node > overview page have no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert showing an operand instance was needed still appears after creating an Operand instance
1889540 - [4.5 upgrade][alert] CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure environment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicks cancel at the Create Storage Class confirmation dialog, all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created from the LSO page, the user doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - Mismatched font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging causes problems. It doesn't sync the "overall" sha, it syncs only the sub-arch sha
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC are already created before OCS install, creation of LVD and LVS is skipped when user clicks create storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer gets stuck pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created
\n1891951 - UI should show warning while creating pools with compression on\n1891952 - [Release 4.7] Apps Domain Enhancement\n1891993 - 4.5 to 4.6 upgrade doesn\u0027t remove deployments created by marketplace\n1891995 - OperatorHub displaying old content\n1891999 - Storage efficiency card showing wrong compression ratio\n1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28\u0027 not found (required by ./opm)\n1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. \n1892198 - TypeError in \u0027Performance Profile\u0027 tab displayed for \u0027Performance Addon Operator\u0027\n1892288 - assisted install workflow creates excessive control-plane disruption\n1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config\n1892358 - [e2e][automation] update feature gate for kubevirt-gating job\n1892376 - Deleted netnamespace could not be re-created\n1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky\n1892393 - TestListPackages is flaky\n1892448 - MCDPivotError alert/metric missing\n1892457 - NTO-shipped stalld needs to use FIFO for boosting. \n1892467 - linuxptp-daemon crash\n1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env\n1892653 - User is unable to create KafkaSource with v1beta\n1892724 - VFS added to the list of devices of the nodeptpdevice CRD\n1892799 - Mounting additionalTrustBundle in the operator\n1893117 - Maintenance mode on vSphere blocks installation. \n1893351 - TLS secrets are not able to edit on console. 
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import 
wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca 
injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest\n1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded\n1896244 - Found a panic in storage e2e test\n1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general\n1896302 - [e2e][automation] Fix 4.6 test failures\n1896365 - [Migration]The SDN migration cannot revert under some conditions\n1896384 - [ovirt IPI]: local coredns resolution not working\n1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6\n1896529 - Incorrect instructions in the Serverless operator and application quick starts\n1896645 - documentationBaseURL needs to be updated for 4.7\n1896697 - [Descheduler] policy.yaml param in cluster configmap is empty\n1896704 - Machine API components should honour cluster wide proxy settings\n1896732 - \"Attach to Virtual Machine OS\" button should not be visible on old clusters\n1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection  is incompatible with SR-IOV operator\n1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails\n1896918 - start creating new-style Secrets for AWS\n1896923 - DNS pod /metrics exposed on anonymous http port\n1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... 
must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\"  \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe  Translation of  \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
\n1898407 - [Deployment timing regression] Deployment takes longer with 4.7\n1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service\n1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine\n1898500 - Failure to upgrade operator when a Service is included in a Bundle\n1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic\n1898532 - Display names defined in specDescriptors not respected\n1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted\n1898613 - Whereabouts should exclude IPv6 ranges\n1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase\n1898679 - Operand creation form - Required \"type: object\" properties (Accordion component) are missing red asterisk\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator\n1898839 - Wrong YAML in operator metadata\n1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job\n1898873 - Remove TechPreview Badge from Monitoring\n1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way\n1899111 - [RFE] Update jenkins-maven-agen to maven36\n1899128 - VMI details screen -\u003e show the warning that it is preferable to have a VM only if the VM actually does not exist\n1899175 - bump the RHCOS boot images for 4.7\n1899198 - Use new packages for ipa ramdisks\n1899200 - In Installed Operators page I cannot search for an Operator by it\u0027s name\n1899220 - Support AWS IMDSv2\n1899350 - configure-ovs.sh doesn\u0027t configure bonding options\n1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error \"An error occurred Not Found\"\n1899459 - 
Failed to start monitoring pods once the operator removed from override list of CVO\n1899515 - Passthrough credentials are not immediately re-distributed on update\n1899575 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899582 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899588 - Operator objects are re-created after all other associated resources have been deleted\n1899600 - Increased etcd fsync latency as of OCP 4.6\n1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup\n1899627 - Project dashboard Active status using small icon\n1899725 - Pods table does not wrap well with quick start sidebar open\n1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)\n1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality\n1899835 - catalog-operator repeatedly crashes with \"runtime error: index out of range [0] with length 0\"\n1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap\n1899853 - additionalSecurityGroupIDs not working for master nodes\n1899922 - NP changes sometimes influence new pods. 
\n1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet\n1900008 - Fix internationalized sentence fragments in ImageSearch.tsx\n1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx\n1900020 - Remove \u0026apos; from internationalized keys\n1900022 - Search Page - Top labels field is not applied to selected Pipeline resources\n1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently\n1900126 - Creating a VM results in suggestion to create a default storage class when one already exists\n1900138 - [OCP on RHV] Remove insecure mode from the installer\n1900196 - stalld is not restarted after crash\n1900239 - Skip \"subPath should be able to unmount\" NFS test\n1900322 - metal3 pod\u0027s toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists\n1900377 - [e2e][automation] create new css selector for active users\n1900496 - (release-4.7) Collect spec config for clusteroperator resources\n1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks\n1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue\n1900759 - include qemu-guest-agent by default\n1900790 - Track all resource counts via telemetry\n1900835 - Multus errors when cachefile is not found\n1900935 - `oc adm release mirror` panic panic: runtime error\n1900989 - accessing the route cannot wake up the idled resources\n1901040 - When scaling down the status of the node is stuck on deleting\n1901057 - authentication operator health check failed when installing a cluster behind proxy\n1901107 - pod donut shows incorrect information\n1901111 - Installer dependencies are broken\n1901200 - linuxptp-daemon crash when enable debug log level\n1901301 - CBO should handle platform=BM without provisioning CR\n1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly\n1901363 - High Podready 
Latency due to timed out waiting for annotations\n1901373 - redundant bracket on snapshot restore button\n1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with \"timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true\"\n1901395 - \"Edit virtual machine template\" action link should be removed\n1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting\n1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP\n1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema\n1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod \"before all\" hook for \"creates the resource instance\"\n1901604 - CNO blocks editing Kuryr options\n1901675 - [sig-network] multicast when using one of the plugins \u0027redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy\u0027 should allow multicast traffic in namespaces where it is enabled\n1901909 - The device plugin pods / cni pod are restarted every 5 minutes\n1901982 - [sig-builds][Feature:Builds] build can reference a cluster service  with a build being created from new-build should be able to run a build that references a cluster service\n1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error\n1902059 - Wire a real signer for service accout issuer\n1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902157 - The DaemonSet machine-api-termination-handler couldn\u0027t allocate Pod\n1902253 - MHC status doesnt set RemediationsAllowed = 0\n1902299 - Failed to mirror operator catalog - error: destination registry required\n1902545 - Cinder csi 
driver node pod should add nodeSelector for Linux\n1902546 - Cinder csi driver node pod doesn\u0027t run on master node\n1902547 - Cinder csi driver controller pod doesn\u0027t run on master node\n1902552 - Cinder csi driver does not use the downstream images\n1902595 - Project workloads list view doesn\u0027t show alert icon and hover message\n1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent\n1902601 - Cinder csi driver pods run as BestEffort qosClass\n1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group\n1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails\n1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked\n1902824 - failed to generate semver informed package manifest: unable to determine default channel\n1902894 - hybrid-overlay-node crashing trying to get node object during initialization\n1902969 - Cannot load vmi detail page\n1902981 - It should default to current namespace when create vm from template\n1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file  via s3:// URI\n1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry\n1903034 - OLM continuously printing debug logs\n1903062 - [Cinder csi driver] Deployment mounted volume have no write access\n1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready\n1903107 - Enable vsphere-problem-detector e2e tests\n1903164 - OpenShift YAML editor jumps to top every few seconds\n1903165 - Improve Canary Status Condition handling for e2e tests\n1903172 - Column Management: Fix sticky footer on scroll\n1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled\n1903188 - [Descheduler] cluster log reports failed to 
validate server configuration\" err=\"unsupported log format:\n1903192 - Role name missing on create role binding form\n1903196 - Popover positioning is misaligned for Overview Dashboard status items\n1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. \n1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components\n1903248 - Backport Upstream Static Pod UID patch\n1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]\n1903290 - Kubelet repeatedly log the same log line from exited containers\n1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. \n1903382 - Panic when task-graph is canceled with a TaskNode with no tasks\n1903400 - Migrate a VM which is not running goes to pending state\n1903402 - Nic/Disk on VMI overview should link to VMI\u0027s nic/disk page\n1903414 - NodePort is not working when configuring an egress IP address\n1903424 - mapi_machine_phase_transition_seconds_sum doesn\u0027t work\n1903464 - \"Evaluating rule failed\" for \"record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\" and \"record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"\n1903639 - Hostsubnet gatherer produces wrong output\n1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service\n1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started\n1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image\n1903717 - Handle different Pod selectors for metal3 Deployment\n1903733 - Scale up followed by scale down can delete all running workers\n1903917 - Failed to load \"Developer Catalog\" page\n1903999 - Httplog response code is always zero\n1904026 - The quota controllers should resync on new 
resources and make progress\n1904064 - Automated cleaning is disabled by default\n1904124 - DHCP to static lease script doesn\u0027t work correctly if starting with infinite leases\n1904125 - Boostrap VM .ign image gets added into \u0027default\u0027 pool instead of \u003ccluster-name\u003e-\u003cid\u003e-bootstrap\n1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails\n1904133 - KubeletConfig flooded with failure conditions\n1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart\n1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !\n1904244 - MissingKey errors for two plugins using i18next.t\n1904262 - clusterresourceoverride-operator has version: 1.0.0 every build\n1904296 - VPA-operator has version: 1.0.0 every build\n1904297 - The index image generated by \"opm index prune\" leaves unrelated images\n1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards\n1904385 - [oVirt] registry cannot mount volume on 4.6.4 -\u003e 4.6.6 upgrade\n1904497 - vsphere-problem-detector: Run on vSphere cloud only\n1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set\n1904502 - vsphere-problem-detector: allow longer timeouts for some operations\n1904503 - vsphere-problem-detector: emit alerts\n1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)\n1904578 - metric scraping for vsphere problem detector is not configured\n1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -\u003e 4.6.6 upgrade\n1904663 - IPI pointer customization MachineConfig always generated\n1904679 - [Feature:ImageInfo] Image info should display information about images\n1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image\n1904684 - [sig-cli] oc debug 
ensure it works with image streams\n1904713 - Helm charts with kubeVersion restriction are filtered incorrectly\n1904776 - Snapshot modal alert is not pluralized\n1904824 - Set vSphere hostname from guestinfo before NM starts\n1904941 - Insights status is always showing a loading icon\n1904973 - KeyError: \u0027nodeName\u0027 on NP deletion\n1904985 - Prometheus and thanos sidecar targets are down\n1904993 - Many ampersand special characters are found in strings\n1905066 - QE - Monitoring test cases - smoke test suite automation\n1905074 - QE -Gherkin linter to maintain standards\n1905100 - Too many haproxy processes in default-router pod causing high load average\n1905104 - Snapshot modal disk items missing keys\n1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm\n1905119 - Race in AWS EBS determining whether custom CA bundle is used\n1905128 - [e2e][automation] e2e tests succeed without actually execute\n1905133 - operator conditions special-resource-operator\n1905141 - vsphere-problem-detector: report metrics through telemetry\n1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures\n1905194 - Detecting broken connections to the Kube API takes up to 15 minutes\n1905221 - CVO transitions from \"Initializing\" to \"Updating\" despite not attempting many manifests\n1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP\n1905253 - Inaccurate text at bottom of Events page\n1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905299 - OLM fails to update operator\n1905307 - Provisioning CR is missing from must-gather\n1905319 - cluster-samples-operator containers are not requesting required memory resource\n1905320 - csi-snapshot-webhook is not requesting required memory resource\n1905323 - dns-operator is not requesting required memory resource\n1905324 - 
ingress-operator is not requesting required memory resource\n1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory\n1905328 - Changing the bound token service account issuer invalids previously issued bound tokens\n1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory\n1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails\n1905347 - QE - Design Gherkin Scenarios\n1905348 - QE - Design Gherkin Scenarios\n1905362 - [sriov] Error message \u0027Fail to update DaemonSet\u0027 always shown in sriov operator pod\n1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted\n1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input\n1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation\n1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1\n1905404 - The example of \"Remove the entrypoint on the mysql:latest image\" for `oc image append` does not work\n1905416 - Hyperlink not working from Operator Description\n1905430 - usbguard extension fails to install because of missing correct protobuf dependency version\n1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads\n1905502 - Test flake - unable to get https transport for ephemeral-registry\n1905542 - [GSS] The \"External\" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6. 
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the `oc rollback -h` help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Cretifiaction field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle''
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditons
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not aditable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in `from dockerfile` flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk souce
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not create during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation]fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
\n1915674 - Golden image PVC creation - storage size should be taken from the template\n1915685 - Message for not supported template is not clear enough\n1915760 - Need to increase timeout to wait rhel worker get ready\n1915793 - quick starts panel syncs incorrectly across browser windows\n1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster\n1915818 - vsphere-problem-detector: use \"_totals\" in metrics\n1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol\n1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version\n1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0\n1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics\n1915885 - Kuryr doesn\u0027t support workers running on multiple subnets\n1915898 - TaskRun log output shows \"undefined\" in streaming\n1915907 - test/cmd/builds.sh uses docker.io\n1915912 - sig-storage-csi-snapshotter image not available\n1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard\n1915939 - Resizing the browser window removes Web Terminal Icon\n1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]\n1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7\n1915962 - ROKS: manifest with machine health check fails to apply in 4.7\n1915972 - Global configuration breadcrumbs do not work as expected\n1915981 - Install ethtool and conntrack in container for debugging\n1915995 - \"Edit RoleBinding Subject\" action under RoleBinding list page kebab actions causes unhandled exception\n1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups\n1916021 - OLM enters infinite loop if Pending CSV 
replaces itself\n1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry\n1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert\u0027s annotations\n1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk\n1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration\n1916145 - Explicitly set minimum versions of python libraries\n1916164 - Update csi-driver-nfs builder \u0026 base images to be consistent with ART\n1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7\n1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third\n1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2\n1916379 - error metrics from vsphere-problem-detector should be gauge\n1916382 - Can\u0027t create ext4 filesystems with Ignition\n1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving \u0027verified: false\u0027 even for verified updates\n1916401 - Deleting an ingress controller with a bad DNS Record hangs\n1916417 - [Kuryr] Must-gather does not have all Custom Resources information\n1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image\n1916454 - teach CCO about upgradeability from 4.6 to 4.7\n1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation\n1916502 - Boot disk mirroring fails with mdadm error\n1916524 - Two rootdisk shows on storage step\n1916580 - Default yaml is broken for VM and VM template\n1916621 - oc adm node-logs examples are wrong\n1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
\n1916692 - Possibly fails to destroy LB and thus cluster\n1916711 - Update Kube dependencies in MCO to 1.20.0\n1916747 - remove links to quick starts if virtualization operator isn\u0027t updated to 2.6\n1916764 - editing a workload with no application applied, will auto fill the app\n1916834 - Pipeline Metrics - Text Updates\n1916843 - collect logs from openshift-sdn-controller pod\n1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed\n1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually\n1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited\n1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error \"Forbidden: cannot specify lbFloatingIP and apiFloatingIP together\"\n1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace\n1917101 - [UPI on oVirt] - \u0027RHCOS image\u0027 topic isn\u0027t located in the right place in UPI document\n1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to \u0027\"ProxyConfigController\" controller failed to sync \"key\"\u0027 error\n1917117 - Common templates - disks screen: invalid disk name\n1917124 - Custom template - clone existing PVC - the name of the target VM\u0027s data volume is hard-coded; only one VM can be created\n1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator\n1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
\n1917148 - [oVirt] Consume 23-10 ovirt sdk\n1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened\n1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console\n1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory\n1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7\n1917327 - annotations.message maybe wrong for NTOPodsNotReady alert\n1917367 - Refactor periodic.go\n1917371 - Add docs on how to use the built-in profiler\n1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console\n1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui\n1917484 - [BM][IPI] Failed to scale down machineset\n1917522 - Deprecate --filter-by-os in oc adm catalog mirror\n1917537 - controllers continuously busy reconciling operator\n1917551 - use min_over_time for vsphere prometheus alerts\n1917585 - OLM Operator install page missing i18n\n1917587 - Manila CSI operator becomes degraded if user doesn\u0027t have permissions to list share types\n1917605 - Deleting an exgw causes pods to no longer route to other exgws\n1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API\n1917656 - Add to Project/application for eventSources from topology shows 404\n1917658 - Show TP badge for sources powered by camel connectors in create flow\n1917660 - Editing parallelism of job get error info\n1917678 - Could not provision pv when no symlink and target found on rhel worker\n1917679 - Hide double CTA in admin pipelineruns tab\n1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster. 
\n1917759 - Console operator panics after setting plugin that does not exists to the console-operator config\n1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0\n1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0\n1917799 - Gather s list of names and versions of installed OLM operators\n1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error\n1917814 - Show Broker create option in eventing under admin perspective\n1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1917872 - [oVirt] rebase on latest SDK 2021-01-12\n1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image\n1917938 - upgrade version of dnsmasq package\n1917942 - Canary controller causes panic in ingress-operator\n1918019 - Undesired scrollbars in markdown area of QuickStart\n1918068 - Flaky olm integration tests\n1918085 - reversed name of job and namespace in cvo log\n1918112 - Flavor is not editable if a customize VM is created from cli\n1918129 - Update IO sample archive with missing resources \u0026 remove IP anonymization from clusteroperator resources\n1918132 - i18n: Volume Snapshot Contents menu is not translated\n1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2\n1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn\u0027t be installed on OSP\n1918153 - When `\u0026` character is set as an environment variable in a build config it is getting converted as `\\u0026`\n1918185 - Capitalization on PLR details page\n1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections\n1918318 - Kamelet connector\u0027s are not shown in eventing section under Admin perspective\n1918351 - Gather SAP configuration (SCC \u0026 ClusterRoleBinding)\n1918375 - [calico] rbac-proxy container in kube-proxy fails to create 
tokenreviews\n1918395 - [ovirt] increase livenessProbe period\n1918415 - MCD nil pointer on dropins\n1918438 - [ja_JP, zh_CN] Serverless i18n misses\n1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig\n1918471 - CustomNoUpgrade Feature gates are not working correctly\n1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk\n1918622 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1918623 - Updating ose-jenkins-agent-nodejs-12 builder \u0026 base images to be consistent with ART\n1918625 - Updating ose-jenkins-agent-nodejs-10 builder \u0026 base images to be consistent with ART\n1918635 - Updating openshift-jenkins-2 builder \u0026 base images to be consistent with ART #1197\n1918639 - Event listener with triggerRef crashes the console\n1918648 - Subscription page doesn\u0027t show InstallPlan correctly\n1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack\n1918748 - helmchartrepo is not http(s)_proxy-aware\n1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1918803 - Need dedicated details page w/ global config breadcrumbs for \u0027KnativeServing\u0027 plugin\n1918826 - Insights popover icons are not horizontally aligned\n1918879 - need better debug for bad pull secrets\n1918958 - The default NMstate instance from the operator is incorrect\n1919097 - Close bracket \")\" missing at the end of the sentence in the UI\n1919231 - quick search modal cut off on smaller screens\n1919259 - Make \"Add x\" singular in Pipeline Builder\n1919260 - VM Template list actions should not wrap\n1919271 - NM prepender script doesn\u0027t support systemd-resolved\n1919341 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry\n1919379 - dotnet logo out of date\n1919387 - Console 
login fails with no error when it can\u0027t write to localStorage\n1919396 - A11y Violation: svg-img-alt on Pod Status ring\n1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren\u0027t verified\n1919750 - Search InstallPlans got Minified React error\n1919778 - Upgrade is stuck in insights operator Degraded with \"Source clusterconfig could not be retrieved\" until insights operator pod is manually deleted\n1919823 - OCP 4.7 Internationalization Chinese tranlate issue\n1919851 - Visualization does not render when Pipeline \u0026 Task share same name\n1919862 - The tip information for `oc new-project  --skip-config-write` is wrong\n1919876 - VM created via customize wizard cannot inherit template\u0027s PVC attributes\n1919877 - Click on KSVC breaks with white screen\n1919879 - The toolbox container name is changed from \u0027toolbox-root\u0027  to \u0027toolbox-\u0027 in a chroot environment\n1919945 - user entered name value overridden by default value when selecting a git repository\n1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference\n1919970 - NTO does not update when the tuned profile is updated. 
\n1919999 - Bump Cluster Resource Operator Golang Versions\n1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration\n1920200 - user-settings network error results in infinite loop of requests\n1920205 - operator-registry e2e tests not working properly\n1920214 - Bump golang to 1.15 in cluster-resource-override-admission\n1920248 - re-running the pipelinerun with pipelinespec crashes the UI\n1920320 - VM template field is \"Not available\" if it\u0027s created from common template\n1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`\n1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs\n1920390 - Monitoring \u003e Metrics graph shifts to the left when clicking the \"Stacked\" option and when toggling data series lines on / off\n1920426 - Egress Router CNI OWNERS file should have ovn-k team members\n1920427 - Need to update `oc login` help page since we don\u0027t support prompt interactively for the username\n1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time\n1920438 - openshift-tuned panics on turning debugging on/off. 
\n1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn\n1920481 - kuryr-cni pods using unreasonable amount of CPU\n1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof\n1920524 - Topology graph crashes adding Open Data Hub operator\n1920526 - catalog operator causing CPU spikes and bad etcd performance\n1920551 - Boot Order is not editable for Templates in \"openshift\" namespace\n1920555 - bump cluster-resource-override-admission api dependencies\n1920571 - fcp multipath will not recover failed paths automatically\n1920619 - Remove default scheduler profile value\n1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present\n1920674 - MissingKey errors in bindings namespace\n1920684 - Text in language preferences modal is misleading\n1920695 - CI is broken because of bad image registry reference in the Makefile\n1920756 - update generic-admission-server library to get the system:masters authorization optimization\n1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for \"network-check-target\" failed when \"defaultNodeSelector\" is set\n1920771 - i18n: Delete persistent volume claim drop down is not translated\n1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI\n1920912 - Unable to power off BMH from console\n1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by \"2\"\n1920984 - [e2e][automation] some menu items names are out dated\n1921013 - Gather PersistentVolume definition (if any) used in image registry config\n1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)\n1921087 - \u0027start next quick start\u0027 link doesn\u0027t work and is unintuitive\n1921088 - test-cmd is failing on volumes.sh pretty consistently\n1921248 - Clarify the kubelet configuration cr description\n1921253 - Text filter default placeholder text not 
internationalized\n1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window\n1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo\n1921277 - Fix Warning and Info log statements to handle arguments\n1921281 - oc get -o yaml --export returns \"error: unknown flag: --export\"\n1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn\u0027t exist\n1921556 - [OCS with Vault]: OCS pods didn\u0027t comeup after deploying with Vault details from UI\n1921572 - For external source (i.e GitHub Source) form view as well shows yaml\n1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass\n1921610 - Pipeline metrics font size inconsistency\n1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1921655 - [OSP] Incorrect error handling during cloudinfo generation\n1921713 - [e2e][automation]  fix failing VM migration tests\n1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view\n1921774 - delete application modal errors when a resource cannot be found\n1921806 - Explore page APIResourceLinks aren\u0027t i18ned\n1921823 - CheckBoxControls not internationalized\n1921836 - AccessTableRows don\u0027t internationalize \"User\" or \"Group\"\n1921857 - Test flake when hitting router in e2e tests due to one router not being up to date\n1921880 - Dynamic plugins are not initialized on console load in production mode\n1921911 - Installer PR #4589 is causing leak of IAM role policy bindings\n1921921 - \"Global Configuration\" breadcrumb does not use sentence case\n1921949 - Console bug - source code URL broken for gitlab self-hosted repositories\n1921954 - Subscription-related constraints in ResolutionFailed events are misleading\n1922015 - buttons in modal header 
are invisible on Safari\n1922021 - Nodes terminal page \u0027Expand\u0027 \u0027Collapse\u0027 button not translated\n1922050 - [e2e][automation] Improve vm clone tests\n1922066 - Cannot create VM from custom template which has extra disk\n1922098 - Namespace selection dialog is not closed after select a namespace\n1922099 - Updated Readme documentation for QE code review and setup\n1922146 - Egress Router CNI doesn\u0027t have logging support. \n1922267 - Collect specific ADFS error\n1922292 - Bump RHCOS boot images for 4.7\n1922454 - CRI-O doesn\u0027t enable pprof by default\n1922473 - reconcile LSO images for 4.8\n1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace\n1922782 - Source registry missing docker:// in yaml\n1922907 - Interop UI Tests - step implementation for updating feature files\n1922911 - Page crash when click the \"Stacked\" checkbox after clicking the data series toggle buttons\n1922991 - \"verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\" test fails on OKD\n1923003 - WebConsole Insights widget showing \"Issues pending\" when the cluster doesn\u0027t report anything\n1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources\n1923102 - [vsphere-problem-detector-operator] pod\u0027s version is not correct\n1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot\n1923674 - k8s 1.20 vendor dependencies\n1923721 - PipelineRun running status icon is not rotating\n1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios\n1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator\n1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator\n1923874 - Unable to specify values with % in kubeletconfig\n1923888 - Fixes error metadata gathering\n1923892 - Update arch.md after 
refactor. \n1923894 - \"installed\" operator status in operatorhub page does not reflect the real status of operator\n1923895 - Changelog generation. \n1923911 - [e2e][automation] Improve tests for vm details page and list filter\n1923945 - PVC Name and Namespace resets when user changes os/flavor/workload\n1923951 - EventSources shows `undefined` in project\n1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins\n1924046 - Localhost: Refreshing on a Project removes it from nav item urls\n1924078 - Topology quick search View all results footer should be sticky. \n1924081 - NTO should ship the latest Tuned daemon release 2.15\n1924084 - backend tests incorrectly hard-code artifacts dir\n1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents  do not have unexpected content using a simple Docker Strategy Build\n1924135 - Under sufficient load, CRI-O may segfault\n1924143 - Code Editor Decorator url is broken for Bitbucket repos\n1924188 - Language selector dropdown doesn\u0027t always pre-select the language\n1924365 - Add extra disk for VM which use boot source PXE\n1924383 - Degraded network operator during upgrade to 4.7.z\n1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
\n1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can\u0027t set finalizers on\n1924583 - Deprectaed templates are listed in the Templates screen\n1924870 - pick upstream pr#96901: plumb context with request deadline\n1924955 - Images from Private external registry not working in deploy Image\n1924961 - k8sutil.TrimDNS1123Label creates invalid values\n1924985 - Build egress-router-cni for both RHEL 7 and 8\n1925020 - Console demo plugin deployment image shoult not point to dockerhub\n1925024 - Remove extra validations on kafka source form view net section\n1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running\n1925072 - NTO needs to ship the current latest stalld v1.7.0\n1925163 - Missing info about dev catalog in boot source template column\n1925200 - Monitoring Alert icon is missing on the workload in Topology view\n1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1\n1925319 - bash syntax error in configure-ovs.sh script\n1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data\n1925516 - Pipeline Metrics Tooltips are overlapping data\n1925562 - Add new ArgoCD link from GitOps application environments page\n1925596 - Gitops details page image and commit id text overflows past card boundary\n1926556 - \u0027excessive etcd leader changes\u0027 test case failing in serial job because prometheus data is wiped by machine set test\n1926588 - The tarball of operator-sdk is not ready for ocp4.7\n1927456 - 4.7 still points to 4.6 catalog images\n1927500 - API server exits non-zero on 2 SIGTERM signals\n1929278 - Monitoring workloads using too high a priorityclass\n1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n1929920 - Cluster monitoring documentation link is broken - 404 not found\n\n5. 
References:

https://access.redhat.com/security/cve/CVE-2018-10103
https://access.redhat.com/security/cve/CVE-2018-10105
https://access.redhat.com/security/cve/CVE-2018-14461
https://access.redhat.com/security/cve/CVE-2018-14462
https://access.redhat.com/security/cve/CVE-2018-14463
https://access.redhat.com/security/cve/CVE-2018-14464
https://access.redhat.com/security/cve/CVE-2018-14465
https://access.redhat.com/security/cve/CVE-2018-14466
https://access.redhat.com/security/cve/CVE-2018-14467
https://access.redhat.com/security/cve/CVE-2018-14468
https://access.redhat.com/security/cve/CVE-2018-14469
https://access.redhat.com/security/cve/CVE-2018-14470
https://access.redhat.com/security/cve/CVE-2018-14553
https://access.redhat.com/security/cve/CVE-2018-14879
https://access.redhat.com/security/cve/CVE-2018-14880
https://access.redhat.com/security/cve/CVE-2018-14881
https://access.redhat.com/security/cve/CVE-2018-14882
https://access.redhat.com/security/cve/CVE-2018-16227
https://access.redhat.com/security/cve/CVE-2018-16228
https://access.redhat.com/security/cve/CVE-2018-16229
https://access.redhat.com/security/cve/CVE-2018-16230
https://access.redhat.com/security/cve/CVE-2018-16300
https://access.redhat.com/security/cve/CVE-2018-16451
https://access.redhat.com/security/cve/CVE-2018-16452
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-3884
https://access.redhat.com/security/cve/CVE-2019-5018
https://access.redhat.com/security/cve/CVE-2019-6977
https://access.redhat.com/security/cve/CVE-2019-6978
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9455
https://access.redhat.com/security/cve/CVE-2019-9458
https://access.redhat.com/security/cve/CVE-2019-11068
https://access.redhat.com/security/cve/CVE-2019-12614
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13225
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15165
https://access.redhat.com/security/cve/CVE-2019-15166
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-15917
https://access.redhat.com/security/cve/CVE-2019-15925
https://access.redhat.com/security/cve/CVE-2019-16167
https://access.redhat.com/security/cve/CVE-2019-16168
https://access.redhat.com/security/cve/CVE-2019-16231
https://access.redhat.com/security/cve/CVE-2019-16233
https://access.redhat.com/security/cve/CVE-2019-16935
https://access.redhat.com/security/cve/CVE-2019-17450
https://access.redhat.com/security/cve/CVE-2019-17546
https://access.redhat.com/security/cve/CVE-2019-18197
https://access.redhat.com/security/cve/CVE-2019-18808
https://access.redhat.com/security/cve/CVE-2019-18809
https://access.redhat.com/security/cve/CVE-2019-19046
https://access.redhat.com/security/cve/CVE-2019-19056
https://access.redhat.com/security/cve/CVE-2019-19062
https://access.redhat.com/security/cve/CVE-2019-19063
https://access.redhat.com/security/cve/CVE-2019-19068
https://access.redhat.com/security/cve/CVE-2019-19072
https://access.redhat.com/security/cve/CVE-2019-19221
https://access.redhat.com/security/cve/CVE-2019-19319
https://access.redhat.com/security/cve/CVE-2019-19332
https://access.redhat.com/security/cve/CVE-2019-19447
https://access.redhat.com/security/cve/CVE-2019-19524
https://access.redhat.com/security/cve/CVE-2019-19533
https://access.redhat.com/security/cve/CVE-2019-19537
https://access.redhat.com/security/cve/CVE-2019-19543
https://access.redhat.com/security/cve/CVE-2019-19602
https://access.redhat.com/security/cve/CVE-2019-19767
https://access.redhat.com/security/cve/CVE-2019-19770
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20054
https://access.redhat.com/security/cve/CVE-2019-20218
https://access.redhat.com/security/cve/CVE-2019-20386
https://access.redhat.com/security/cve/CVE-2019-20387
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20636
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-20812
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2019-20916
https://access.redhat.com/security/cve/CVE-2020-0305
https://access.redhat.com/security/cve/CVE-2020-0444
https://access.redhat.com/security/cve/CVE-2020-1716
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-1751
https://access.redhat.com/security/cve/CVE-2020-1752
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-2574
https://access.redhat.com/security/cve/CVE-2020-2752
https://access.redhat.com/security/cve/CVE-2020-2922
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3898
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-6405
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8492
https://access.redhat.com/security/cve/CVE-2020-8563
https://access.redhat.com/security/cve/CVE-2020-8566
https://access.redhat.com/security/cve/CVE-2020-8619
https://access.redhat.com/security/cve/CVE-2020-8622
https://access.redhat.com/security/cve/CVE-2020-8623
https://access.redhat.com/security/cve/CVE-2020-8624
https://access.redhat.com/security/cve/CVE-2020-8647
https://access.redhat.com/security/cve/CVE-2020-8648
https://access.redhat.com/security/cve/CVE-2020-8649
https://access.redhat.com/security/cve/CVE-2020-9327
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com
/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-10732\nhttps://access.redhat.com/security/cve/CVE-2020-10749\nhttps://access.redhat.com/security/cve/CVE-2020-10751\nhttps://access.redhat.com/security/cve/CVE-2020-10763\nhttps://access.redhat.com/security/cve/CVE-2020-10773\nhttps://access.redhat.com/security/cve/CVE-2020-10774\nhttps://access.redhat.com/security/cve/CVE-2020-10942\nhttps://access.redhat.com/security/cve/CVE-2020-11565\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-12465\nhttps://access.redhat.com/security/cve/CVE-2020-12655\nhttps://access.redhat.com/security/cve/CVE-2020-12659\nhttps://access.redhat.com/security/cve/CVE-2020-12770\nhttps://access.redhat.com/security/cve/CVE-2020-12826\nhttps://access.redhat.com/security/cve/CVE-2020-13249\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14019\nhttps://access.redhat.com/security/cve/CVE-2020-14040\nhttps://access.redhat.com/security/cve/CVE-2020-14381\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nh
ttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15157\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-15862\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2020-16166\nhttps://access.redhat.com/security/cve/CVE-2020-24490\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-25211\nhttps://access.redhat.com/security/cve/CVE-2020-25641\nhttps://access.redhat.com/security/cve/CVE-2020-25658\nhttps://access.redhat.com/security/cve/CVE-2020-25661\nhttps://access.redhat.com/security/cve/CVE-2020-25662\nhttps://access.redhat.com/security/cve/CVE-2020-25681\nhttps://access.redhat.com/security/cve/CVE-2020-25682\nhttps://access.redhat.com/security/cve/CVE-2020-25683\nhttps://access.redhat.com/security/cve/CVE-2020-25684\nhttps://access.redhat.com/security/cve/CVE-2020-25685\nhttps://access.redhat.com/security/cve/CVE-2020-25686\nhttps://access.redhat.com/security/cve/CVE-2020-25687\nhttps://access.redhat.com/security/cve/CVE-2020-25694\nhttps://access.redhat.com/security/cve/CVE-2020-25696\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-27813\nhttps://access.redhat.com/security/cve/CVE-2020-27846\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/cve/CVE-2020-29652\nhttps://access.redhat.com/security/cve/CVE-2021-2007\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1732329 - Virtual Machine is missing documentation of its properties in yaml editor\n1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv\n1791753 - [RFE] [SSP] Template validator should check validations in template\u0027s parent template\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1848954 - KMP missing CA extensions  in cabundle of mutatingwebhookconfiguration\n1848956 - KMP  requires downtime for CA stabilization during certificate rotation\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1853911 - VM with dot in network name fails to start with unclear message\n1854098 - NodeNetworkState on workers doesn\u0027t have \"status\" key due to nmstate-handler pod failure to run \"nmstatectl show\"\n1856347 - SR-IOV : Missing network name for sriov during vm setup\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1859235 - Common Templates - after upgrade there are 2  common templates per each os-workload-flavor combination\n1860714 - No API information from `oc explain`\n1860992 - CNV upgrade - users are not removed from privileged  SecurityContextConstraints\n1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem\n1866593 - CDI is not handling vm disk clone\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868817 - Container-native Virtualization 2.6.0 Images\n1873771 - Improve the VMCreationFailed error message caused by VM low memory\n1874812 - SR-IOV: Guest Agent  expose link-local ipv6 address  for sometime and then remove it\n1878499 - DV import doesn\u0027t recover from scratch space PVC deletion\n1879108 - Inconsistent 
naming of \"oc virt\" command in help text\n1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running\n1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message\n1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used\n1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied\n1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. Bugs fixed (https://bugzilla.redhat.com/):\n\n1823765 - nfd-workers crash under an ipv6 environment\n1838802 - mysql8 connector from operatorhub does not work with metering operator\n1838845 - Metering operator can\u0027t connect to postgres DB from Operator Hub\n1841883 - namespace-persistentvolumeclaim-usage  query returns unexpected values\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1868294 - NFD operator does not allow customisation of nfd-worker.conf\n1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration\n1890672 - NFD is missing a build flag to build correctly\n1890741 - path to the CA trust bundle ConfigMap is broken in report operator\n1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster\n1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel\n1900125 - FIPS error while generating RSA private key for CA\n1906129 - OCP 4.7:  Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub\n1908492 - OCP 4.7:  Node Feature Discovery (NFD) Operator Custom 
Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub\n1913837 - The CI and ART 4.7 metering images are not mirrored\n1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le\n1916010 - olm skip range is set to the wrong range\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923998 - NFD Operator is failing to update and remains in Replacing state\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2020-1-29-2 iCloud for Windows 10.9.2\n\niCloud for Windows 10.9.2 is now available and addresses the\nfollowing:\n\nImageIO\nAvailable for: Windows 10 and later via the Microsoft Store\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2020-3826: Samuel Gro\u00df of Google Project Zero\n\nlibxml2\nAvailable for: Windows 10 and later via the Microsoft Store\nImpact: Processing maliciously crafted XML may lead to an unexpected\napplication termination or arbitrary code execution\nDescription: A buffer overflow was addressed with improved size\nvalidation. 
\nCVE-2020-3865: Ryan Pickren (ryanpickren.com)\n\nInstallation note:\n\niCloud for Windows 10.9.2 may be obtained from:\nhttps://support.apple.com/HT204283\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n. ------------------------------------------------------------------------\nWebKitGTK and WPE WebKit Security Advisory                 WSA-2020-0002\n------------------------------------------------------------------------\n\nDate reported           : February 14, 2020\nAdvisory ID             : WSA-2020-0002\nWebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2020-0002.html\nWPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2020-0002.html\nCVE identifiers         : CVE-2020-3862, CVE-2020-3864, CVE-2020-3865,\n                          CVE-2020-3867, CVE-2020-3868. \n\nSeveral vulnerabilities were discovered in WebKitGTK and WPE WebKit. 
\n\nCVE-2020-3862\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to Srikanth Gatta of Google Chrome. \n\nCVE-2020-3864\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to Ryan Pickren (ryanpickren.com). \n\nCVE-2020-3865\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to Ryan Pickren (ryanpickren.com). \n\nCVE-2020-3867\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to an anonymous researcher. \n\nCVE-2020-3868\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to Marcin Towalski of Cisco Talos. \n\n\nWe recommend updating to the latest stable versions of WebKitGTK and WPE\nWebKit. It is the best way to ensure that you are running safe versions\nof WebKit. Please check our websites for information about the latest\nstable releases. \n\nFurther information about WebKitGTK and WPE WebKit security advisories\ncan be found at: https://webkitgtk.org/security.html or\nhttps://wpewebkit.org/security/. \n\nThe WebKitGTK and WPE WebKit team,\nFebruary 14, 2020\n\n. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202003-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: March 15, 2020\n     Bugs: #699156, #706374, #709612\n       ID: 202003-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich may lead to arbitrary code execution. 
\n\nBackground\n==========\n\nWebKitGTK+ is a full-featured port of the WebKit rendering engine,\nsuitable for projects requiring any kind of web integration, from\nhybrid HTML/CSS applications to full-fledged web browsers. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk          \u003c 2.26.4                  \u003e= 2.26.4\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.26.4\"\n\nReferences\n==========\n\n[  1 ] CVE-2019-8625\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8625\n[  2 ] CVE-2019-8674\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8674\n[  3 ] CVE-2019-8707\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8707\n[  4 ] CVE-2019-8710\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8710\n[  5 ] CVE-2019-8719\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8719\n[  6 ] CVE-2019-8720\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8720\n[  7 ] CVE-2019-8726\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8726\n[  8 ] CVE-2019-8733\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8733\n[  9 ] CVE-2019-8735\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8735\n[ 10 ] CVE-2019-8743\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8743\n[ 11 ] CVE-2019-8763\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8763\n[ 12 ] CVE-2019-8764\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8764\n[ 13 ] CVE-2019-8765\n       
https://nvd.nist.gov/vuln/detail/CVE-2019-8765\n[ 14 ] CVE-2019-8766\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8766\n[ 15 ] CVE-2019-8768\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8768\n[ 16 ] CVE-2019-8769\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8769\n[ 17 ] CVE-2019-8771\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8771\n[ 18 ] CVE-2019-8782\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8782\n[ 19 ] CVE-2019-8783\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8783\n[ 20 ] CVE-2019-8808\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8808\n[ 21 ] CVE-2019-8811\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8811\n[ 22 ] CVE-2019-8812\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8812\n[ 23 ] CVE-2019-8813\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8813\n[ 24 ] CVE-2019-8814\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8814\n[ 25 ] CVE-2019-8815\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8815\n[ 26 ] CVE-2019-8816\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8816\n[ 27 ] CVE-2019-8819\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8819\n[ 28 ] CVE-2019-8820\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8820\n[ 29 ] CVE-2019-8821\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8821\n[ 30 ] CVE-2019-8822\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8822\n[ 31 ] CVE-2019-8823\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8823\n[ 32 ] CVE-2019-8835\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8835\n[ 33 ] CVE-2019-8844\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8844\n[ 34 ] CVE-2019-8846\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8846\n[ 35 ] CVE-2020-3862\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3862\n[ 36 ] CVE-2020-3864\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3864\n[ 37 ] CVE-2020-3865\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3865\n[ 38 ] CVE-2020-3867\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3867\n[ 39 ] CVE-2020-3868\n     
  https://nvd.nist.gov/vuln/detail/CVE-2020-3868\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202003-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-3867"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      }
    ],
    "trust": 2.52
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-3867",
        "trust": 3.4
      },
      {
        "db": "JVN",
        "id": "JVNVU95678717",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "156392",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0538",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0346",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0567",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0677",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156153",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156398",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-15548",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-181992",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3867",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160624",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "156152",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "id": "VAR-202002-1182",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181992"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:28:09.983000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT210918",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210918"
      },
      {
        "title": "HT210920",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210920"
      },
      {
        "title": "HT210922",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210922"
      },
      {
        "title": "HT210947",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210947"
      },
      {
        "title": "HT210923",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210923"
      },
      {
        "title": "HT210948",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210948"
      },
      {
        "title": "HT210918",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210918"
      },
      {
        "title": "HT210947",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210947"
      },
      {
        "title": "HT210920",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210920"
      },
      {
        "title": "HT210948",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210948"
      },
      {
        "title": "HT210922",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210922"
      },
      {
        "title": "HT210923",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT210923"
      },
      {
        "title": "openSUSE-SU-2020:0278-1",
        "trust": 0.8,
        "url": "https://lists.opensuse.org/opensuse-security-announce/2020-03/msg00004.html"
      },
      {
        "title": "Multiple Apple product WebKit Fixes for component cross-site scripting vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=111058"
      },
      {
        "title": "Ubuntu Security Notice: webkit2gtk vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4281-1"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2020-3867 log"
      },
      {
        "title": "Debian Security Advisories: DSA-4627-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=78d59627ce9fc1018e3643bc1e16ec00"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202002-10] webkit2gtk: multiple issues",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202002-10"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-79",
        "trust": 1.9
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210947"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210948"
      },
      {
        "trust": 1.8,
        "url": "http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00004.html"
      },
      {
        "trust": 1.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-3867"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu95678717/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht210795"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht210794"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210947"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0538/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156392/webkitgtk-wpe-webkit-dos-logic-issue-code-execution.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156153/apple-security-advisory-2020-1-29-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156398/debian-security-advisory-4627-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0346/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-five-vulnerabilities-31619"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0567/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0677/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/79.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4281-1/"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5605"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3826"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht204283"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3825"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3846"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security.html"
      },
      {
        "trust": 0.1,
        "url": "https://wpewebkit.org/security/wsa-2020-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2020-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://wpewebkit.org/security/."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-02-27T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "date": "2020-02-27T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "date": "2020-12-18T19:14:41",
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2020-01-30T14:46:23",
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "date": "2020-02-17T15:02:22",
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "date": "2020-03-15T14:00:23",
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "date": "2020-01-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      },
      {
        "date": "2020-03-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "date": "2020-02-27T21:15:18.130000",
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-12-22T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181992"
      },
      {
        "date": "2021-12-22T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3867"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      },
      {
        "date": "2020-03-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      },
      {
        "date": "2024-11-21T05:31:51.980000",
        "db": "NVD",
        "id": "CVE-2020-3867"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "plural  Apple Cross-site scripting vulnerabilities in products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-002339"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "XSS",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1412"
      }
    ],
    "trust": 0.6
  }
}

VAR-201912-1851

Vulnerability from variot - Updated: 2025-12-22 23:27

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, and iTunes for Windows 12.10.2. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari and the other affected platforms are Apple products. Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple watchOS earlier than 6.1; Safari earlier than 13.0.3; iOS earlier than 13.2; iPadOS earlier than 13.2; tvOS earlier than 13.2; and iTunes for Windows earlier than 12.10.2. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections; an attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601) An out-of-bounds read was addressed with improved input validation. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4; no further details are available in NIST. This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history. 
The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846) WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (the versions immediately prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote attackers to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902) In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3-compatible API.
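Several of the WebKit flaws above (CVE-2020-10018 among them) belong to the use-after-free class, where an object is freed while another pointer still references it. The C sketch below is purely illustrative — the `node` type and its functions are invented for this example, not WebKit code — and shows the reference-counting discipline such fixes typically rely on: the object is freed only when the last reference is released, and the caller's pointer is cleared so it cannot be dereferenced afterwards.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical object standing in for a shared DOM node. */
typedef struct {
    int refcount;
    int value;
} node;

static node *node_new(int value) {
    node *n = malloc(sizeof *n);
    if (!n)
        return NULL;
    n->refcount = 1;          /* the creator holds the first reference */
    n->value = value;
    return n;
}

static node *node_retain(node *n) {
    n->refcount++;            /* each additional holder takes a reference */
    return n;
}

/* Drop one reference; free the object only when no holder remains.
 * Clearing the caller's pointer is what prevents a later use-after-free. */
static void node_release(node **np) {
    node *n = *np;
    if (n && --n->refcount == 0)
        free(n);
    *np = NULL;
}
```

With two holders, releasing one reference leaves the object alive for the other; only the final release frees the memory, and both pointers are nulled so neither can be used after the free.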

These updated images include numerous security fixes, bug fixes, and enhancements. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume 1813506 - Dockerfile not compatible with docker and buildah 1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup 1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement 1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance 1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https) 1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. 1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default 1842254 - [NooBaa] Compression stats do not add up when compression id disabled 1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster 1849771 - [RFE] Account created by OBC should have same permissions as bucket owner 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot 1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume 1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount 1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params) 1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14) 1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage 1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards 1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found 1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining 1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script 1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases. 1865938 - CSIDrivers missing in OCS 4.6 1867024 - [ocs-operator] operator v4.6.0-519.ci is in Installing state 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868060 - [External Cluster] Noobaa-default-backingstore PV in released state upon OCS 4.5 uninstall (Secret not found) 1868703 - [rbd] After volume expansion, the new size is not reflected on the pod 1869411 - capture full crash information from ceph 1870061 - [RHEL][IBM] OCS un-install should make the devices raw 1870338 - OCS 4.6 must-gather : ocs-must-gather-xxx-helper pod in ContainerCreationError (couldn't find key admin-secret) 1870631 - OCS 4.6 Deployment : RGW pods went into 'CrashLoopBackOff' state on Z Platform 1872119 - Updates don't work on StorageClass which will keep PV expansion disabled for upgraded cluster 1872696 - [ROKS][RFE]NooBaa Configure IBM COS as default backing store 1873864 - Noobaa: On an baremetal RHCOS cluster, some backingstores are stuck in PROGRESSING state with INVALID_ENDPOINT TemporaryError 1874606 - CVE-2020-7720 nodejs-node-forge: prototype 
pollution via the util.setPath function 1875476 - Change noobaa logo in the noobaa UI 1877339 - Incorrect use of logr 1877371 - NooBaa UI warning message on Deploy Kubernetes Pool process - typo and shown number is incorrect 1878153 - OCS 4.6 must-gather: collect node information under cluster_scoped_resources/oc_output directory 1878714 - [FIPS enabled] BadDigest error on file upload to noobaa bucket 1878853 - [External Mode] ceph-external-cluster-details-exporter.py does not tolerate TLS enabled RGW 1879008 - ocs-osd-removal job fails because it can't find admin-secret in rook-ceph-mon secret 1879072 - Deployment with encryption at rest is failing to bring up OSD pods 1879919 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed 1880255 - Collect rbd info and subvolume info and snapshot info command output 1881028 - CVE-2020-8237 nodejs-json-bigint: Prototype pollution via __proto__ assignment could result in DoS 1881071 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed 1882397 - MCG decompression problem with snappy on s390x arch 1883253 - CSV doesn't contain values required for UI to enable minimal deployment and cluster encryption 1883398 - Update csi sidecar containers in rook 1883767 - Using placement strategies in cluster-service.yaml causes ocs-operator to crash 1883810 - [External mode] RGW metrics is not available after OCS upgrade from 4.5 to 4.6 1883927 - Deployment with encryption at rest is failing to bring up OSD pods 1885175 - Handle disappeared underlying device for encrypted OSD 1885428 - panic seen in rook-ceph during uninstall - "close of closed channel" 1885648 - [Tracker for https://bugzilla.redhat.com/show_bug.cgi?id=1885700] FSTYPE for localvolumeset devices shows up as ext2 after uninstall 1885971 - ocs-storagecluster-cephobjectstore doesn't report true state of RGW 1886308 - Default VolumeSnapshot Classes not created in External Mode 1886348 - osd removal job failed with status "Error" 1886551 - 
Clone creation failed after timeout of 5 hours of Azure platrom for 3 CephFS PVCs ( PVC sizes: 1, 25 and 100 GB) 1886709 - [External] RGW storageclass disappears after upgrade from OCS 4.5 to 4.6 1886859 - OCS 4.6: Uninstall stuck indefinitely if any Ceph pods are in Pending state before uninstall 1886873 - [OCS 4.6 External/Internal Uninstall] - Storage Cluster deletion stuck indefinitely, "failed to delete object store", remaining users: [noobaa-ceph-objectstore-user] 1888583 - [External] When deployment is attempted without specifying the monitoring-endpoint while generating JSON, the CSV is stuck in installing state 1888593 - [External] Add validation for monitoring-endpoint and port in the exporter script 1888614 - [External] Unreachable monitoring-endpoint used during deployment causes ocs-operator to crash 1889441 - Traceback error message while running OCS 4.6 must-gather 1889683 - [GSS] Noobaa Problem when setting public access to a bucket 1889866 - Post node power off/on, an unused MON PVC still stays back in the cluster 1890183 - [External] ocs-operator logs are filled with "failed to reconcile metrics exporter" 1890638 - must-gather helper pod should be deleted after collecting ceph crash info 1890971 - [External] RGW metrics are not available if anything else except 9283 is provided as the monitoring-endpoint-port 1891856 - ocs-metrics-exporter pod should have tolerations for OCS taint 1892206 - [GSS] Ceph image/version mismatch 1892234 - clone #95 creation failed for CephFS PVC ( 10 GB PVC size) during multiple clones creation test 1893624 - Must Gather is not collecting the tar file from NooBaa diagnose 1893691 - OCS4.6 must_gather failes to complete in 600sec 1893714 - Bad response for upload an object with encryption 1895402 - Mon pods didn't get upgraded in 720 second timeout from OCS 4.5 upgrade to 4.6 1896298 - [RFE] Monitoring for Namespace buckets and resources 1896831 - Clone#452 for RBD PVC ( PVC size 1 GB) failed to be created for 600 secs 
1898521 - [CephFS] Deleting cephfsplugin pod along with app pods will make PV remain in Released state after deleting the PVC 1902627 - must-gather should wait for debug pods to be in ready state 1904171 - RGW Service is unavailable for a short period during upgrade to OCS 4.6

Bug Fix(es): * NVD feed fixed in Clair-v2 (clair-jwt image)

  1. Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3 quay.io/redhat/clair-jwt:v3.3.3 quay.io/redhat/quay-builder:v3.3.3 quay.io/redhat/clair:v3.3.3

  1. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass 1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

  1. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902251 - The compliancesuite object returns error with ocp4-cis tailored profile 1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object 1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object 1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator 1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" 1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

APPLE-SA-2019-10-29-3 tvOS 13.2

tvOS 13.2 is now available and addresses the following:

Accounts Available for: Apple TV 4K and Apple TV HD Impact: A remote attacker may be able to leak memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at Technische Universität Darmstadt

App Store Available for: Apple TV 4K and Apple TV HD Impact: A local attacker may be able to log in to the account of a previously logged in user without valid credentials. CVE-2019-8803: Kiyeon An, 차민규 (CHA Minkyu)

Audio Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8785: Ian Beer of Google Project Zero CVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure

AVEVideoEncoder Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure

File System Events Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8798: ABC Research s.r.o. working with Trend Micro's Zero Day Initiative

Kernel
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to read restricted memory
Description: A validation issue was addressed with improved input sanitization.
CVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure

Kernel
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8782: Cheolung Lee of LINE+ Security Team
CVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team
CVE-2019-8808: found by OSS-Fuzz
CVE-2019-8811: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8812: an anonymous researcher
CVE-2019-8814: Cheolung Lee of LINE+ Security Team
CVE-2019-8816: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8819: Cheolung Lee of LINE+ Security Team
CVE-2019-8820: Samuel Groß of Google Project Zero
CVE-2019-8821: Sergei Glazunov of Google Project Zero
CVE-2019-8822: Sergei Glazunov of Google Project Zero
CVE-2019-8823: Sergei Glazunov of Google Project Zero

WebKit Process Model
Available for: Apple TV 4K and Apple TV HD
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: Multiple memory corruption issues were addressed with improved memory handling.
CVE-2019-8815: Apple

Additional recognition

CFNetwork
We would like to acknowledge Lily Chen of Google for their assistance.

Kernel
We would like to acknowledge Jann Horn of Google Project Zero for their assistance.

WebKit
We would like to acknowledge Dlive of Tencent's Xuanwu Lab and Zhiyi Zhang of Codesafe Team of Legendsec at Qi'anxin Group for their assistance.

Installation note:

Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software."

To check the current version of software, select "Settings -> General -> About."

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----

iQJdBAEBCABHFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl24p5UpHHByb2R1Y3Qt c2VjdXJpdHktbm9yZXBseUBsaXN0cy5hcHBsZS5jb20ACgkQBz4uGe3y0M2DIQ/9 FQmnN+1/tdXaFFI1PtdJ9hgXONcdsi+D05mREDTX7v0VaLzChX/N3DccI00Z1uT5 VNKHRjInGYDZoO/UntzAWoZa+tcueaY23XhN9xTYrUlt1Ol1gIsaxTEgPtax4B9A PoqWb6S+oK1SHUxglGnlLtXkcyt3WHJ5iqan7BM9XX6dsriwgoBgKADpFi3FCXoa cFIvpoM6ZhxYyMPpxmMc1IRwgjDwOn2miyjkSaAONXw5R5YGRxSsjq+HkzYE3w1m m2NZElUB1nRmlyuU3aMsHUTxwAnfzryPiHRGUTcNZao39YBsyWz56sr3++g7qmnD uZZzBnISQpC6oJCWclw3UHcKHH+V0+1q059GHBoku6Xmkc5bPRnKdFgSf5OvyQUw XGjwL5UbGB5eTtdj/Kx5Rd/m5fFIUxVu7HB3bGQGhYHIc9iTdi9j3mCd3nOHCIEj Re07c084jl2Git4sH2Tva7tOqFyI2IyNVJ0LjBXO54fAC2mtFz3mkDFxCEzL5V92 O/Wct2T6OpYghzkrOOlUEAQJTbwJjTZBWsUubcOoJo6P9JUPBDJKB0ibAaCWrt9I 8OU5wRr3q0fTA3N/qdGGbQ/tgUwiMHGuqrHMv0XYGPfO5Qg5GuHpTYchZrP5nFwf ziQuQtO92b1FA4sDI+ue1sDIG84tPrkTAeLmveBjezc= =KsmX -----END PGP SIGNATURE-----

. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407
=====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
  • grafana: Snapshot authentication bypass (CVE-2021-39226)
  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
  • nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
  • grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
  • grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
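As a minimal sketch of the step above (the repository path and digest are taken from the x86_64 listing earlier in this advisory; `--to-image` is a documented `oc adm upgrade` flag, but the exact pinning workflow shown here is an illustrative assumption, not the only supported path), an upgrade can be pinned to the advisory's release image by digest so the cluster pulls exactly the image that was published:

```shell
# Build the by-digest pullspec (repo@sha256:...) for the x86_64 release image
# listed in this advisory, then hand it to `oc adm upgrade`.
REPO="quay.io/openshift-release-dev/ocp-release"
DIGEST="sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
PULLSPEC="${REPO}@${DIGEST}"
echo "${PULLSPEC}"

# Illustration only -- against a live cluster you would run:
#   oc adm upgrade --to-image="${PULLSPEC}"
```

Pinning by digest rather than a floating tag means the content-addressed image reference itself guarantees you receive the release whose digest the advisory lists.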

  3. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform 1976894 - Unidling a StatefulSet does not work as expected 1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases 1977414 - Build Config timed out waiting for condition 400: Bad Request 1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus 1978528 - systemd-coredump started and failed intermittently for unknown reasons 1978581 - machine-config-operator: remove runlevel from mco namespace 1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable 1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9 1979966 - OCP builds always fail when run on RHEL7 nodes 1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading 1981549 - Machine-config daemon does not recover from broken Proxy configuration 1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel] 1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues 1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page 1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands 1982662 - Workloads - DaemonSets - Add storage: i18n misses 1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters 1983758 - upgrades are failing on disruptive tests 1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma" 1984592 - global pull secret not working in OCP4.7.4+ for additional private registries 1985073 - new-in-4.8 
ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs 1985486 - Cluster Proxy not used during installation on OSP with Kuryr 1985724 - VM Details Page missing translations 1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted 1985933 - Downstream image registry recommendation 1985965 - oVirt CSI driver does not report volume stats 1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding" 1986237 - "MachineNotYetDeleted" in Pending state , alert not fired 1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid" 1986302 - console continues to fetch prometheus alert and silences for normal user 1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI 1986338 - error creating list of resources in Import YAML 1986502 - yaml multi file dnd duplicates previous dragged files 1986819 - fix string typos for hot-plug disks 1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure 1987136 - Declare operatorframework.io/arch. 
labels for all operators 1987257 - Go-http-client user-agent being used for oc adm mirror requests 1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold 1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP 1988406 - SSH key dropped when selecting "Customize virtual machine" in UI 1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade 1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs 1989438 - expected replicas is wrong 1989502 - Developer Catalog is disappearing after short time 1989843 - 'More' and 'Show Less' functions are not translated on several page 1990014 - oc debug does not work for Windows pods 1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created 1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page 1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar 1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI 1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks 1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var 1990625 - Ironic agent registers with SLAAC address with privacy-stable 1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time 1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
1991573 - Enable typescript strictNullCheck on network-policies files 1991641 - Baremetal Cluster Operator still Available After Delete Provisioning 1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator 1991819 - Misspelled word "ocurred" in oc inspect cmd 1991942 - Alignment and spacing fixes 1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked 1992453 - The configMap failed to save on VM environment tab 1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab 1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab 1992509 - Could not customize boot source due to source PVC not found 1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines 1992580 - storageProfile should stay with the same value by check/uncheck the apply button 1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply 1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios 1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed) 1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing 1994094 - Some hardcodes are detected at the code level in OpenShift console components 1994142 - Missing required cloud config fields for IBM Cloud 1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools 1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart 1995335 - [SCALE] ovnkube CNI: remove ovs flows check 1995493 - Add Secret to workload button and Actions button are not aligned on secret details page 1995531 - Create RDO-based Ironic image to be promoted to OKD 1995545 - Project drop-down 
amalgamates inside main screen while creating storage system for odf-operator 1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs 1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread 1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole 1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN 1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down 1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page 1996647 - Provide more useful degraded message in auth operator on DNS errors 1996736 - Large number of 501 lr-policies in INCI2 env 1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes 1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP 1996928 - Enable default operator indexes on ARM 1997028 - prometheus-operator update removes env var support for thanos-sidecar 1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used 1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller. 
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI 1997269 - Have to refresh console to install kube-descheduler 1997478 - Storage operator is not available after reboot cluster instances 1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 1997967 - storageClass is not reserved from default wizard to customize wizard 1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order 1998038 - [e2e][automation] add tests for UI for VM disk hot-plug 1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus 1998174 - Create storageclass gp3-csi after install ocp cluster on aws 1998183 - "r: Bad Gateway" info is improper 1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected 1998377 - Filesystem table head is not full displayed in disk tab 1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory 1998519 - Add fstype when create localvolumeset instance on web console 1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses 1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page 1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable 1999091 - Console update toast notification can appear multiple times 1999133 - removing and recreating static pod manifest leaves pod in error state 1999246 - .indexignore is not ingore when oc command load dc configuration 1999250 - ArgoCD in GitOps operator can't manage namespaces 1999255 - ovnkube-node always crashes out the first time it starts 1999261 - ovnkube-node log spam (and security token leak?) 
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page 1999314 - console-operator is slow to mark Degraded as False once console starts working 1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck) 1999556 - "master" pool should be updated before the CVO reports available at the new version occurred 1999578 - AWS EFS CSI tests are constantly failing 1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages 1999619 - cloudinit is malformatted if a user sets a password during VM creation flow 1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow 1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined 1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub) 1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource 1999771 - revert "force cert rotation every couple days for development" in 4.10 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace. 
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions 1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form 1999983 - No way to clear upload error from template boot source 2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter 2000096 - Git URL is not re-validated on edit build-config form reload 2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig 2000236 - Confusing usage message from dynkeepalived CLI 2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported 2000430 - bump cluster-api-provider-ovirt version in installer 2000450 - 4.10: Enable static PV multi-az test 2000490 - All critical alerts shipped by CMO should have links to a runbook 2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded) 2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster 2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled 2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console 2000754 - IPerf2 tests should be lower 2000846 - Structure logs in the entire codebase of Local Storage Operator 2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24 2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM 2000938 - CVO does not respect changes to a Deployment strategy 2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy 2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone 2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole 2001295 - Remove 
openshift:kubevirt-machine-controllers decleration from machine-api 2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error 2001337 - Details Card in ODF Dashboard mentions OCS 2001339 - fix text content hotplug 2001413 - [e2e][automation] add/delete nic and disk to template 2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log 2001442 - Empty termination.log file for the kube-apiserver has too permissive mode 2001479 - IBM Cloud DNS unable to create/update records 2001566 - Enable alerts for prometheus operator in UWM 2001575 - Clicking on the perspective switcher shows a white page with loader 2001577 - Quick search placeholder is not displayed properly when the search string is removed 2001578 - [e2e][automation] add tests for vm dashboard tab 2001605 - PVs remain in Released state for a long time after the claim is deleted 2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options 2001620 - Cluster becomes degraded if it can't talk to Manila 2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF 2001761 - Unable to apply cluster operator storage for SNO on GCP platform. 
2001765 - Some error message in the log of diskmaker-manager caused confusion 2001784 - show loading page before final results instead of showing a transient message No log files exist 2001804 - Reload feature on Environment section in Build Config form does not work properly 2001810 - cluster admin unable to view BuildConfigs in all namespaces 2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well 2001823 - OCM controller must update operator status 2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start 2001835 - Could not select image tag version when create app from dev console 2001855 - Add capacity is disabled for ocs-storagecluster 2001856 - Repeating event: MissingVersion no image found for operand pod 2001959 - Side nav list borders don't extend to edges of container 2002007 - Layout issue on "Something went wrong" page 2002010 - ovn-kube may never attempt to retry a pod creation 2002012 - Cannot change volume mode when cloning a VM from a template 2002027 - Two instances of Dotnet helm chart show as one in topology 2002075 - opm render does not automatically pulling in the image(s) used in the deployments 2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster 2002125 - Network policy details page heading should be updated to Network Policy details 2002133 - [e2e][automation] add support/virtualization and improve deleteResource 2002134 - [e2e][automation] add test to verify vm details tab 2002215 - Multipath day1 not working on s390x 2002238 - Image stream tag is not persisted when switching from yaml to form editor 2002262 - [vSphere] Incorrect user agent in vCenter sessions list 2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector 2002276 - OLM fails to upgrade operators immediately 2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods 2002354 - 
Missing DU configuration "Done" status reporting during ZTP flow 2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs 2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation 2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN 2002397 - Resources search is inconsistent 2002434 - CRI-O leaks some children PIDs 2002443 - Getting undefined error on create local volume set page 2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy 2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods. 2002559 - User preference for topology list view does not follow when a new namespace is created 2002567 - Upstream SR-IOV worker doc has broken links 2002588 - Change text to be sentence case to align with PF 2002657 - ovn-kube egress IP monitoring is using a random port over the node network 2002713 - CNO: OVN logs should have millisecond resolution 2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event 2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite 2002763 - Two storage systems getting created with external mode RHCS 2002808 - KCM does not use web identity credentials 2002834 - Cluster-version operator does not remove unrecognized volume mounts 2002896 - Incorrect result return when user filter data by name on search page 2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- " 2003096 - [e2e][automation] check bootsource URL is displaying on review step 2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role 2003120 - CI: Uncaught error 
with ResizeObserver on operand details page 2003145 - Duplicate operand tab titles causes "two children with the same key" warning 2003164 - OLM, fatal error: concurrent map writes 2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form 2003193 - Kubelet/crio leaks netns and veth ports in the host 2003195 - OVN CNI should ensure host veths are removed 2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images 2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace 2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI 2003244 - Revert libovsdb client code 2003251 - Patternfly components with list element has list item bullet when they should not. 2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI 2003269 - Rejected pods should be filtered from admission regression 2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release 2003426 - [e2e][automation] add test for vm details bootorder 2003496 - [e2e][automation] add test for vm resources requirment settings 2003641 - All metal ipi jobs are failing in 4.10 2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state 2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node 2003683 - Samples operator is panicking in CI 2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page 2003715 - Error on creating local volume set after selection of the volume mode 2003743 - Remove workaround keeping /boot RW for kdump support 2003775 - etcd pod 
on CrashLoopBackOff after master replacement procedure 2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver 2003792 - Monitoring metrics query graph flyover panel is useless 2003808 - Add Sprint 207 translations 2003845 - Project admin cannot access image vulnerabilities view 2003859 - sdn emits events with garbage messages 2003896 - (release-4.10) ApiRequestCounts conditional gatherer 2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas 2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes 2004059 - [e2e][automation] fix current tests for downstream 2004060 - Trying to use basic spring boot sample causes crash on Firefox 2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection 2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently 2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver 2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory 2004449 - Boot option recovery menu prevents image boot 2004451 - The backup filename displayed in the RecentBackup message is incorrect 2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts 2004508 - TuneD issues with the recent ConfigParser changes. 
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions 2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs 2004578 - Monitoring and node labels missing for an external storage platform 2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days 2004596 - [4.10] Bootimage bump tracker 2004597 - Duplicate ramdisk log containers running 2004600 - Duplicate ramdisk log containers running 2004609 - output of "crictl inspectp" is not complete 2004625 - BMC credentials could be logged if they change 2004632 - When LE takes a large amount of time, multiple whereabouts are seen 2004721 - ptp/worker custom threshold doesn't change ptp events threshold 2004736 - [knative] Create button on new Broker form is inactive despite form being filled 2004796 - [e2e][automation] add test for vm scheduling policy 2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque 2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card 2004901 - [e2e][automation] improve kubevirt devconsole tests 2004962 - Console frontend job consuming too much CPU in CI 2005014 - state of ODF StorageSystem is misreported during installation or uninstallation 2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines 2005179 - pods status filter is not taking effect 2005182 - sync list of deprecated apis about to be removed 2005282 - Storage cluster name is given as title in StorageSystem details page 2005355 - setuptools 58 makes Kuryr CI fail 2005407 - ClusterNotUpgradeable Alert should be set to Severity Info 2005415 - PTP operator with sidecar api configured throws bind: address already in use 2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console 2005554 - The switch status of the button "Show default project" is not revealed correctly in code 2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator 
pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable 2005761 - QE - Implementing crw-basic feature file 2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow 2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty 2005854 - SSH NodePort service is created for each VM 2005901 - KS, KCM and KA going Degraded during master nodes upgrade 2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user 2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics 2005971 - Change telemeter to report the Application Services product usage metrics 2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files 2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased 2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types 2006101 - Power off fails for drivers that don't support Soft power off 2006243 - Metal IPI upgrade jobs are running out of disk space 2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address 2006308 - Backing Store YAML tab on click displays a blank screen on UI 2006325 - Multicast is broken across nodes 2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators 2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource 2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception" 2006714 - add retry for etcd errors in 
kube-apiserver 2006767 - KubePodCrashLooping may not fire 2006803 - Set CoreDNS cache entries for forwarded zones 2006861 - Add Sprint 207 part 2 translations 2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap 2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors 2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded 2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick 2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails 2007271 - CI Integration for Knative test cases 2007289 - kubevirt tests are failing in CI 2007322 - Devfile/Dockerfile import does not work for unsupported git host 2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 2007379 - Events are not generated for master offset for ordinary clock 2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace 2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address 2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error 2007522 - No new local-storage-operator-metadata-container is build for 4.10 2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10 2007580 - Azure cilium installs are failing e2e tests 2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10 2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes 2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures 2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow 2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates 2007802 - AWS machine actuator get stuck if machine is 
completely missing 2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator 2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process 2008151 - Topology breaks on clicking in empty state 2008185 - Console operator go.mod should use go 1.16.version 2008201 - openstack-az job is failing on haproxy idle test 2008207 - vsphere CSI driver doesn't set resource limits 2008223 - gather_audit_logs: fix oc command line to get the current audit profile 2008235 - The Save button in the Edit DC form remains disabled 2008256 - Update Internationalization README with scope info 2008321 - Add correct documentation link for MON_DISK_LOW 2008462 - Disable PodSecurity feature gate for 4.10 2008490 - Backing store details page does not contain all the kebab actions. 2008521 - gcp-hostname service should correct invalid search entries in resolv.conf 2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount 2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror 2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers 2008599 - Azure Stack UPI does not have Internal Load Balancer 2008612 - Plugin asset proxy does not pass through browser cache headers 2008712 - VPA webhook timeout prevents all pods from starting 2008733 - kube-scheduler: exposed /debug/pprof port 2008911 - Prometheus repeatedly scaling prometheus-operator replica set 2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial] 2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12 2009055 - Instances of OCS to be replaced with ODF on UI 2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs 2009083 - opm blocks pruning of existing bundles during add 2009111 - [IPI-on-GCP] 'Install a cluster with nested 
virtualization enabled' failed due to unable to launch compute instances 2009131 - [e2e][automation] add more test about vmi 2009148 - [e2e][automation] test vm nic presets and options 2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator 2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family 2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted 2009384 - UI changes to support BindableKinds CRD changes 2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped 2009424 - Deployment upgrade is failing availability check 2009454 - Change web terminal subscription permissions from get to list 2009465 - container-selinux should come from rhel8-appstream 2009514 - Bump OVS to 2.16-15 2009555 - Supermicro X11 system not booting from vMedia with AI 2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points 2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow 2009699 - Failure to validate flavor RAM 2009754 - Footer is not sticky anymore in import forms 2009785 - CRI-O's version file should be pinned by MCO 2009791 - Installer: ibmcloud ignores install-config values 2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13 2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo 2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests 2009873 - Stale Logical Router Policies and Annotations for a given node 2009879 - There should be test-suite coverage to ensure admin-acks work as expected 2009888 - SRO package name collision between official and community version 2010073 - uninstalling and then reinstalling sriov-network-operator is not working 2010174 - 2 PVs get 
created unexpectedly with different paths that actually refer to the same device on the node. 2010181 - Environment variables not getting reset on reload on deployment edit form 2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2010341 - OpenShift Alerting Rules Style-Guide Compliance 2010342 - Local console builds can have out of memory errors 2010345 - OpenShift Alerting Rules Style-Guide Compliance 2010348 - Reverts PIE build mode for K8S components 2010352 - OpenShift Alerting Rules Style-Guide Compliance 2010354 - OpenShift Alerting Rules Style-Guide Compliance 2010359 - OpenShift Alerting Rules Style-Guide Compliance 2010368 - OpenShift Alerting Rules Style-Guide Compliance 2010376 - OpenShift Alerting Rules Style-Guide Compliance 2010662 - Cluster is unhealthy after image-registry-operator tests 2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent) 2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API 2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address 2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing 2010864 - Failure building EFS operator 2010910 - ptp worker events unable to identify interface for multiple interfaces 2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24 2010921 - Azure Stack Hub does not handle additionalTrustBundle 2010931 - SRO CSV uses non default category "Drivers and plugins" 2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
2011038 - optional operator conditions are confusing 2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass 2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's 2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image 2011368 - Tooltip in pipeline visualization shows misleading data 2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels 2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards 2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster 2011513 - Kubelet rejects pods that use resources that should be freed by completed pods 2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine" 2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented 2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore 2011733 - Repository README points to broken documentarion link 2011753 - Ironic resumes clean before raid configuration job is actually completed 2011809 - The nodes page in the openshift console doesn't work. 
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing 
in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. 
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default 2013751 - Service details page is showing wrong in-cluster hostname 2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page 2013871 - Resource table headings are not aligned with their column data 2013895 - Cannot enable accelerated network via MachineSets on Azure 2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude" 2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG) 2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain 2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab) 2013996 - Project detail page: Action "Delete Project" does nothing for the default project 2014071 - Payload imagestream new tags not properly updated during cluster upgrade 2014153 - SRIOV exclusive pooling 2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace 2014238 - AWS console test is failing on importing duplicate YAML definitions 2014245 - Several aria-labels, external links, and labels aren't internationalized 2014248 - Several files aren't internationalized 2014352 - Could not filter out machine by using node name on machines page 2014464 - Unexpected spacing/padding below navigation groups in developer perspective 2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages 2014486 - Integration Tests: OLM single namespace operator tests failing 2014488 - Custom operator cannot change orders of condition tables 2014497 - Regex slows down different forms and creates too much recursion errors in the log 2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id' 2014614 - Metrics 
scraping requests should be assigned to exempt priority level 2014710 - TestIngressStatus test is broken on Azure 2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly 2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile 2015115 - [RFE] PCI passthrough 2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter 2015154 - Support ports defined networks and primarySubnet 2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic 2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production 2015386 - Possibility to add labels to the built-in OCP alerts 2015395 - Table head on Affinity Rules modal is not fully expanded 2015416 - CI implementation for Topology plugin 2015418 - Project Filesystem query returns No datapoints found 2015420 - No vm resource in project view's inventory 2015422 - No conflict checking on snapshot name 2015472 - Form and YAML view switch button should have distinguishable status 2015481 - [4.10] sriov-network-operator daemon pods are failing to start 2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting 2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English 2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator 
package cic-operator FOREIGN KEY constraint failed. 2017427 - NTO does not restart TuneD daemon when profile application is taking too long 2017535 - Broken Argo CD link image on GitOps Details Page 2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references 2017564 - On-prem prepender dispatcher script overwrites DNS search settings 2017565 - CCMO does not handle additionalTrustBundle on Azure Stack 2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice 2017606 - [e2e][automation] add test to verify send key for VNC console 2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes 2017656 - VM IP address is "undefined" under VM details -> ssh field 2017663 - SSH password authentication is disabled when public key is not supplied 2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP 2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set 2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource 2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults 2017761 - [e2e][automation] dummy bug for 4.9 test dependency 2017872 - Add Sprint 209 translations 2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances 2017879 - Add Chinese translation for "alternate" 2017882 - multus: add handling of pod UIDs passed from runtime 2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods 2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI 2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS 2018094 - the tooltip length is limited 2018152 - CNI pod is not restarted when It cannot start servers due to ports being used 
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time 2018234 - user settings are saved in local storage instead of on cluster 2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?) 2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports) 2018275 - Topology graph doesn't show context menu for Export CSV 2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked 2018380 - Migrate docs links to access.redhat.com 2018413 - Error: context deadline exceeded, OCP 4.8.9 2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked 2018445 - [e2e][automation] enhance tests for downstream 2018446 - [e2e][automation] move tests to different level 2018449 - [e2e][automation] add test about create/delete network attachment definition 2018490 - [4.10] Image provisioning fails with file name too long 2018495 - Fix typo in internationalization README 2018542 - Kernel upgrade does not reconcile DaemonSet 2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit 2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes 2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950 2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10 2018985 - The rootdisk size is 15Gi of windows VM in customize wizard 2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error .
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store , backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed 2035772 - AccessMode and VolumeMode is not reserved for customize wizard 2035847 - Two dashes in the Cronjob / Job pod name 2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml 2035882 - [BIOS setting values] Create events for all invalid settings in spec 2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests” 2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen 2035927 - Cannot enable HighNodeUtilization scheduler profile 2035933 - volume mode and access mode are empty in customize wizard review tab 2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods 2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation 2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error 2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential 2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend 2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes 2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23 2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23 2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments 2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists 2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected 2036826 - oc adm prune deployments can prune the RC/RS 2036827 - The ccoctl still accepts CredentialsRequests without 
ServiceAccounts on GCP platform 2036861 - kube-apiserver is degraded while enable multitenant 2036937 - Command line tools page shows wrong download ODO link 2036940 - oc registry login fails if the file is empty or stdout 2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container 2036989 - Route URL copy to clipboard button wraps to a separate line by itself 2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters 2036993 - Machine API components should use Go lang version 1.17 2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. 2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api 2037073 - Alertmanager container fails to start because of startup probe never being successful 2037075 - Builds do not support CSI volumes 2037167 - Some log level in ibm-vpc-block-csi-controller are hard code 2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles 2037182 - PingSource badge color is not matched with knativeEventing color 2037203 - "Running VMs" card is too small in Virtualization Overview 2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly 2037237 - Add "This is a CD-ROM boot source" to customize wizard 2037241 - default TTL for noobaa cache buckets should be 0 2037246 - Cannot customize auto-update boot source 2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately 2037288 - Remove stale image reference 2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources 2037483 - Rbacs for Pods within the CBO should be more restrictive 2037484 - Bump dependencies to k8s 1.23 2037554 - Mismatched wave number error message should include the wave numbers that are in conflict 2037622 - [4.10-Alibaba CSI driver][Restore size for 
volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform] 2037635 - impossible to configure custom certs for default console route in ingress config 2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8 2037638 - Builds do not support CSI volumes as volume sources 2037664 - text formatting issue in Installed Operators list table 2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080 2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080 2037801 - Serverless installation is failing on CI jobs for e2e tests 2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format 2037856 - use lease for leader election 2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10 2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests 2037904 - upgrade operator deployment failed due to memory limit too low for manager container 2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation] 2038034 - non-privileged user cannot see auto-update boot source 2038053 - Bump dependencies to k8s 1.23 2038088 - Remove ipa-downloader references 2038160 - The default project missed the annotation : openshift.io/node-selector: "" 2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional 2038196 - must-gather is missing collecting some metal3 resources 2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777) 2038253 - Validator Policies are long lived 2038272 - Failures to build a PreprovisioningImage are not reported 2038384 - Azure Default Instance Types are Incorrect 2038389 - Failing test: [sig-arch] events should not repeat 
pathologically 2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket 2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips 2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained 2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect 2038663 - update kubevirt-plugin OWNERS 2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new" 2038705 - Update ptp reviewers 2038761 - Open Observe->Targets page, wait for a while, page become blank 2038768 - All the filters on the Observe->Targets page can't work 2038772 - Some monitors failed to display on Observe->Targets page 2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node 2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces 2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard 2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation 2038864 - E2E tests fail because multi-hop-net was not created 2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console 2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured 2038968 - Move feature gates from a carry patch to openshift/api 2039056 - Layout issue with breadcrumbs on API explorer page 2039057 - Kind column is not wide enough in API explorer page 2039064 - Bulk Import e2e test flaking at a high rate 2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled 2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters 2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously 
assigned got lost 2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy 2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator 2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade 2039227 - Improve image customization server parameter passing during installation 2039241 - Improve image customization server parameter passing during installation 2039244 - Helm Release revision history page crashes the UI 2039294 - SDN controller metrics cannot be consumed correctly by prometheus 2039311 - oc Does Not Describe Build CSI Volumes 2039315 - Helm release list page should only fetch secrets for deployed charts 2039321 - SDN controller metrics are not being consumed by prometheus 2039330 - Create NMState button doesn't work in OperatorHub web console 2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations 2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists 2039382 - gather_metallb_logs does not have execution permission 2039406 - logout from rest session after vsphere operator sync is finished 2039408 - Add GCP region northamerica-northeast2 to allowed regions 2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration 2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment 2039491 - oc - git:// protocol used in unit tests 2039516 - Bump OVN to ovn21.12-21.12.0-25 2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate 2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled 2039541 - Resolv-prepender script duplicating entries 2039586 - [e2e] update centos8 to centos stream8 2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty 2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3' 2039670 - Create PDBs for control plane components 2039678 - Page goes blank when create image pull secret 2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported 2039743 - React missing key warning when open operator hub detail page (and maybe others as well) 2039756 - React missing key warning when open KnativeServing details 2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab 2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard 2039781 - [GSS] OBC is not visible by admin of a Project on Console 2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector 2039868 - Insights Advisor widget is not in the disabled state when the Insights 
Operator is disabled 2039880 - Log level too low for control plane metrics 2039919 - Add E2E test for router compression feature 2039981 - ZTP for standard clusters installs stalld on master nodes 2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead 2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced 2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message 2040150 - Update ConfigMap keys for IBM HPCS 2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth 2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository 2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp 2040376 - "unknown instance type" error for supported m6i.xlarge instance 2040394 - Controller: enqueue the failed configmap till services update 2040467 - Cannot build ztp-site-generator container image 2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4 2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps 2040535 - Auto-update boot source is not available in customize wizard 2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name 2040603 - rhel worker scaleup playbook failed because missing some dependency of podman 2040616 - rolebindings page doesn't load for normal users 2040620 - [MAPO] Error pulling MAPO image on installation 2040653 - Topology sidebar warns that another component is updated while rendering 2040655 - User settings update fails when selecting application in topology sidebar 2040661 - Different react warnings about updating state on unmounted components when leaving topology 2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation 2040671 - 
[Feature:IPv6DualStack] most tests are failing in dualstack ipi 2040694 - Three upstream HTTPClientConfig struct fields missing in the operator 2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers 2040710 - cluster-baremetal-operator cannot update BMC subscription CR 2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms 2040782 - Import YAML page blocks input with more then one generateName attribute 2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute 2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator 2040793 - Fix snapshot e2e failures 2040880 - do not block upgrades if we can't connect to vcenter 2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10 2041093 - autounattend.xml missing 2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates 2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped" 2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23 2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller 2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener 2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud 2041466 - Kubedescheduler version is missing from the operator logs 2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses 2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods) 2041492 - Spacing between resources in inventory card is too small 2041509 - GCP Cloud provider components should use K8s 
1.23 dependencies 2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook 2041541 - audit: ManagedFields are dropped using API not annotation 2041546 - ovnkube: set election timer at RAFT cluster creation time 2041554 - use lease for leader election 2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected" 2041583 - etcd and api server cpu mask interferes with a guaranteed workload 2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure 2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation 2041620 - bundle CSV alm-examples does not parse 2041641 - Fix inotify leak and kubelet retaining memory 2041671 - Delete templates leads to 404 page 2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category 2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled 2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint 2041763 - The Observe > Alerting pages no longer have their default sort order applied 2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken 2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied 2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster 2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases 2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist 2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen 2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile 2041999 - [PROXY] external dns pod cannot recognize custom proxy CA 2042001 - unexpectedly 
found multiple load balancers 2042029 - kubedescheduler fails to install completely 2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters 2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs 2042059 - update discovery burst to reflect lots of CRDs on openshift clusters 2042069 - Revert toolbox to rhcos-toolbox 2042169 - Can not delete egressnetworkpolicy in Foreground propagation 2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool 2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud 2042274 - Storage API should be used when creating a PVC 2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection 2042366 - Lifecycle hooks should be independently managed 2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway 2042382 - [e2e][automation] CI takes more then 2 hours to run 2042395 - Add prerequisites for active health checks test 2042438 - Missing rpms in openstack-installer image 2042466 - Selection does not happen when switching from Topology Graph to List View 2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver 2042567 - insufficient info on CodeReady Containers configuration 2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk 2042619 - Overview page of the console is broken for hypershift clusters 2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running 2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud 2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud 2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly 2042829 - Topology performance: HPA was 
fetched for each Deployment (Pod Ring) 2042851 - Create template from SAP HANA template flow - VM is created instead of a new template 2042906 - Edit machineset with same machine deletion hook name succeed 2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal" 2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways' 2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial] 2043043 - Cluster Autoscaler should use K8s 1.23 dependencies 2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props) 2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects". 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 2044201 - Templates golden image parameters names should be supported 2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8] 2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied" 2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects 2044347 - Bump to kubernetes 1.23.3 2044481 - collect sharedresource cluster scoped instances with must-gather 2044496 - Unable to create hardware events subscription - failed to add finalizers 2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources 2044680 - Additional libovsdb performance and resource consumption fixes 2044704 - Observe > Alerting pages should not show runbook links in 4.10 2044717 - [e2e] improve tests for upstream test environment 2044724 - Remove namespace column on VM list page when a project 
is selected 2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff 2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD 2045024 - CustomNoUpgrade alerts should be ignored 2045112 - vsphere-problem-detector has missing rbac rules for leases 2045199 - SnapShot with Disk Hot-plug hangs 2045561 - Cluster Autoscaler should use the same default Group value as Cluster API 2045591 - Reconciliation of aws pod identity mutating webhook did not happen 2045849 - Add Sprint 212 translations 2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10 2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin 2045916 - [IBMCloud] Default machine profile in installer is unreliable 2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment 2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify 2046137 - oc output for unknown commands is not human readable 2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance 2046297 - Bump DB reconnect timeout 2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations 2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors 2046626 - Allow setting custom metrics for Ansible-based Operators 2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud 2047025 - Installation fails because of Alibaba CSI driver operator is degraded 2047190 - Bump Alibaba CSI driver for 4.10 2047238 - When using communities and localpreferences together, only localpreference gets applied 2047255 - alibaba: resourceGroupID not found 2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions 
2047317 - Update HELM OWNERS files under Dev Console 2047455 - [IBM Cloud] Update custom image os type 2047496 - Add image digest feature 2047779 - do not degrade cluster if storagepolicy creation fails 2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2047929 - use lease for leader election 2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services 2048048 - Application tab in User Preferences dropdown menus are too wide. 2048050 - Topology list view items are not highlighted on keyboard navigation 2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value 2048413 - Bond CNI: Failed to attach Bond NAD to pod 2048443 - Image registry operator panics when finalizes config deletion 2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-* 2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt 2048598 - Web terminal view is broken 2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure 2048891 - Topology page is crashed 2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class 2049043 - Cannot create VM from template 2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2049886 - Placeholder bug for OCP 4.10.0 metadata release 2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning 2050189 - [aws-efs-csi-driver] Merge upstream changes since 
v1.3.2 2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0 2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension' 2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s] 2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members 2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes 2050370 - alert data for burn budget needs to be updated to prevent regression 2050393 - ZTP missing support for local image registry and custom machine config 2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud 2050737 - Remove metrics and events for master port offsets 2050801 - Vsphere upi tries to access vsphere during manifests generation phase 2050883 - Logger object in LSO does not log source location accurately 2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit 2052062 - Whereabouts should implement client-go 1.22+ 2052125 - [4.10] Crio appears to be coredumping in some scenarios 2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config 2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1 2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch 2052756 - [4.10] PVs are not being cleaned up after PVC deletion 2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index 2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format" 2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs 2053268 - inability to detect static lifecycle failure 2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates 2053323 - OpenShift-Ansible BYOH Unit Tests are Broken 2053339 - Remove dev preview badge from IBM FlashSystem deployment windows 2053751 - ztp-site-generate container is missing convenience entrypoint 2053945 - [4.10] Failed to apply sriov policy on intel nics 2054109 - Missing "app" label 2054154 - RoleBinding in project without subject is causing "Project access" page to fail 2054244 - Latest pipeline run should be listed on the top of the pipeline run list 2054288 - console-master-e2e-gcp-console is broken 2054562 - DPU network operator 4.10 branch need to sync with master 2054897 - Unable to deploy hw-event-proxy operator 2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently 2055358 - Summary Interval Hardcoded in PTP Operator 
if Set in the Global Body Instead of Command Line 2055371 - Remove Check which enforces summary_interval must match logSyncInterval 2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11 2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API 2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured 2056479 - ovirt-csi-driver-node pods are crashing intermittently 2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope" 2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes" 2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs 2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation 2056948 - post 1.23 rebase: regression in service-load balancer reliability 2057438 - Service Level Agreement (SLA) always show 'Unknown' 2057721 - Fix Proxy support in RHACM 2.4.2 2057724 - Image creation fails when NMstateConfig CR is empty 2058641 - [4.10] Pod density test causing problems when using kube-burner 2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install 2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials 2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc

  1. References:

https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL 0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z 7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT b5PUYUBIZLc= =GUDA -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .

Installation note:

Safari 13.0.3 may be obtained from the Mac App Store.

Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

  1. Description:

WebKitGTK+ is a port of the WebKit portable web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
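As a sketch of the solution step, assuming a registered RHEL 7 host: the `yum` line is left commented out because it needs root and access to the Red Hat repositories, and the result is then verified by comparing the installed build against the fixed 2.28.2-2.el7 release using GNU `sort -V`:

```shell
#!/bin/sh
# Returns success when version $1 >= version $2 (GNU "sort -V" ordering).
ver_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

# Apply the advisory (requires root and the RHEL 7 repositories):
# yum update webkitgtk4

# Verify that the installed build is at least the fixed 2.28.2-2.el7:
installed=$(rpm -q --qf '%{VERSION}-%{RELEASE}\n' webkitgtk4 2>/dev/null | head -n 1)
if [ -n "$installed" ] && ver_ge "$installed" "2.28.2-2.el7"; then
    echo "webkitgtk4 is patched ($installed)"
else
    echo "webkitgtk4 is missing or older than 2.28.2-2.el7"
fi
```

The same `ver_ge` check works for any of the architecture-specific packages listed below, since they all share the 2.28.2-2.el7 version-release string.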

  1. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

ppc64: webkitgtk4-2.28.2-2.el7.ppc.rpm webkitgtk4-2.28.2-2.el7.ppc64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le: webkitgtk4-2.28.2-2.el7.ppc64le.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x: webkitgtk4-2.28.2-2.el7.s390.rpm webkitgtk4-2.28.2-2.el7.s390x.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64: webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x: webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-devel-2.28.2-2.el7.s390.rpm webkitgtk4-devel-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state

  1. 8) - aarch64, ppc64le, s390x, x86_64

  2. Bugs fixed (https://bugzilla.redhat.com/):

1207179 - Select items matching non existing pattern does not unselect already selected 1566027 - can't correctly compute contents size if hidden files are included 1569868 - Browsing samba shares using gvfs is very slow 1652178 - [RFE] perf-tool run on wayland 1656262 - The terminal's character display is unclear on rhel8 guest after installing gnome 1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled 1692536 - login screen shows after gnome-initial-setup 1706008 - Sound Effect sometimes fails to change to selected option. 1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead. 1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined 1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly 1758891 - tracker-devel subpackage missing from el8 repos 1775345 - Rebase xdg-desktop-portal to 1.6 1778579 - Nautilus does not respect umask settings. 1779691 - Rebase xdg-desktop-portal-gtk to 1.6 1794045 - There are two different high contrast versions of desktop icons 1804719 - Update vte291 to 0.52.4 1805929 - RHEL 8.1 gnome-shell-extension errors 1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp 1814820 - No checkbox to install updates in the shutdown dialog 1816070 - "search for an application to open this file" dialog broken 1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution 1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution 1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution 1817143 - Rebase WebKitGTK to 2.28 1820759 - Include IO stall fixes 1820760 - Include IO fixes 1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening 1827030 - gnome-settings-daemon: subscription notification on CentOS Stream 1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content 1832347 - [Rebase] 
Rebase pipewire to 0.3.x 1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install 1837381 - Backport screen cast improvements to 8.3 1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version 1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6 1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113 1840080 - Can not control top bar menus via keys in Wayland 1840788 - [flatpak][rhel8] unable to build potrace as dependency 1843486 - Software crash after clicking Updates tab 1844578 - anaconda very rarely crashes at startup with a pygobject traceback 1846191 - usb adapters hotplug crashes gnome-shell 1847051 - JS ERROR: TypeError: area is null 1847061 - File search doesn't work under certain locales 1847062 - gnome-remote-desktop crash on QXL graphics 1847203 - gnome-shell: get_top_visible_window_actor(): gnome-shell killed by SIGSEGV 1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow 1854734 - PipeWire 0.2 should be required by xdg-desktop-portal 1866332 - Remove obsolete libusb-devel dependency 1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at "Started GNOME Display Manager" - GDM regression issue

Show details on source website
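The CVSS v3.1 base score of 8.8 (High) recorded for CVE-2019-8808 in the entry below can be reproduced from its vector string, `CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H`. A minimal sketch using the metric weights and Roundup function from the FIRST CVSS v3.1 specification; it handles only Scope:Unchanged vectors (like this one), and the vector-parsing helper is illustrative, not part of any library:

```python
import math

# CVSS v3.1 metric weights (from the FIRST.org specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}  # weights for Scope: Unchanged
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(x):
    # CVSS v3.1 "Roundup": smallest one-decimal value >= x, computed on
    # integers to avoid floating-point artifacts (spec Appendix A).
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector):
    # Parse "CVSS:3.1/AV:N/AC:L/..." into a metric -> value mapping.
    m = dict(p.split(":") for p in vector.split("/")[1:])
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss  # Scope: Unchanged only
    expl = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR_U[m["PR"]] * UI[m["UI"]]
    return 0.0 if impact <= 0 else roundup(min(impact + expl, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))  # → 8.8
```

The impact (5.9) and exploitability (2.8) subscores shown in the entry are the rounded `impact` and `expl` intermediates from the same calculation.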

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1851",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.3"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.2"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "6.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2019-8808",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-8808",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-160243",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2019-8808",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-8808",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201910-1777",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-160243",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-8808",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari, etc. are all products of Apple (Apple). Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple watchOS earlier than 6.1; Safari earlier than 13.0.3; iOS earlier than 13.2; iPadOS earlier than 13.2; tvOS earlier than 13.2; Windows-based iTunes version 12.10.2. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. 
The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nThese updated images include numerous security fixes, bug fixes, and\nenhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume
1813506 - Dockerfile not compatible with docker and buildah
1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement
1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node.
1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
1842254 - [NooBaa] Compression stats do not add up when compression id disabled
1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster
1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount
1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)
1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found
1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases.
1865938 - CSIDrivers missing in OCS 4.6
1867024 - [ocs-operator] operator v4.6.0-519.ci is in Installing state
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868060 - [External Cluster] Noobaa-default-backingstore PV in released state upon OCS 4.5 uninstall (Secret not found)
1868703 - [rbd] After volume expansion, the new size is not reflected on the pod
1869411 - capture full crash information from ceph
1870061 - [RHEL][IBM] OCS un-install should make the devices raw
1870338 - OCS 4.6 must-gather : ocs-must-gather-xxx-helper pod in ContainerCreationError (couldn't find key admin-secret)
1870631 - OCS 4.6 Deployment : RGW pods went into 'CrashLoopBackOff' state on Z Platform
1872119 - Updates don't work on StorageClass which will keep PV expansion disabled for upgraded cluster
1872696 - [ROKS][RFE]NooBaa Configure IBM COS as default backing store
1873864 - Noobaa: On an baremetal RHCOS cluster, some backingstores are stuck in PROGRESSING state with INVALID_ENDPOINT TemporaryError
1874606 - CVE-2020-7720 nodejs-node-forge: prototype pollution via the util.setPath function
1875476 - Change noobaa logo in the noobaa UI
1877339 - Incorrect use of logr
1877371 - NooBaa UI warning message on Deploy Kubernetes Pool process - typo and shown number is incorrect
1878153 - OCS 4.6 must-gather: collect node information under cluster_scoped_resources/oc_output directory
1878714 - [FIPS enabled] BadDigest error on file upload to noobaa bucket
1878853 - [External Mode] ceph-external-cluster-details-exporter.py does not tolerate TLS enabled RGW
1879008 - ocs-osd-removal job fails because it can't find admin-secret in rook-ceph-mon secret
1879072 - Deployment with encryption at rest is failing to bring up OSD pods
1879919 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed
1880255 - Collect rbd info and subvolume info and snapshot info command output
1881028 - CVE-2020-8237 nodejs-json-bigint: Prototype pollution via `__proto__` assignment could result in DoS
1881071 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed
1882397 - MCG decompression problem with snappy on s390x arch
1883253 - CSV doesn't contain values required for UI to enable minimal deployment and cluster encryption
1883398 - Update csi sidecar containers in rook
1883767 - Using placement strategies in cluster-service.yaml causes ocs-operator to crash
1883810 - [External mode] RGW metrics is not available after OCS upgrade from 4.5 to 4.6
1883927 - Deployment with encryption at rest is failing to bring up OSD pods
1885175 - Handle disappeared underlying device for encrypted OSD
1885428 - panic seen in rook-ceph during uninstall - "close of closed channel"
1885648 - [Tracker for https://bugzilla.redhat.com/show_bug.cgi?id=1885700] FSTYPE for localvolumeset devices shows up as ext2 after uninstall
1885971 - ocs-storagecluster-cephobjectstore doesn't report true state of RGW
1886308 - Default VolumeSnapshot Classes not created in External Mode
1886348 - osd removal job failed with status "Error"
1886551 - Clone creation failed after timeout of 5 hours of Azure platrom for 3 CephFS PVCs ( PVC sizes: 1, 25 and 100 GB)
1886709 - [External] RGW storageclass disappears after upgrade from OCS 4.5 to 4.6
1886859 - OCS 4.6: Uninstall stuck indefinitely if any Ceph pods are in Pending state before uninstall
1886873 - [OCS 4.6 External/Internal Uninstall] - Storage Cluster deletion stuck indefinitely, "failed to delete object store", remaining users: [noobaa-ceph-objectstore-user]
1888583 - [External] When deployment is attempted without specifying the monitoring-endpoint while generating JSON, the CSV is stuck in installing state
1888593 - [External] Add validation for monitoring-endpoint and port in the exporter script
1888614 - [External] Unreachable monitoring-endpoint used during deployment causes ocs-operator to crash
1889441 - Traceback error message while running OCS 4.6 must-gather
1889683 - [GSS] Noobaa Problem when setting public access to a bucket
1889866 - Post node power off/on, an unused MON PVC still stays back in the cluster
1890183 - [External] ocs-operator logs are filled with "failed to reconcile metrics exporter"
1890638 - must-gather helper pod should be deleted after collecting ceph crash info
1890971 - [External] RGW metrics are not available if anything else except 9283 is provided as the monitoring-endpoint-port
1891856 - ocs-metrics-exporter pod should have tolerations for OCS taint
1892206 - [GSS] Ceph image/version mismatch
1892234 - clone #95 creation failed for CephFS PVC ( 10 GB PVC size) during multiple clones creation test
1893624 - Must Gather is not collecting the tar file from NooBaa diagnose
1893691 - OCS4.6 must_gather failes to complete in 600sec
1893714 - Bad response for upload an object with encryption
1895402 - Mon pods didn't get upgraded in 720 second timeout from OCS 4.5 upgrade to 4.6
1896298 - [RFE] Monitoring for Namespace buckets and resources
1896831 - Clone#452 for RBD PVC ( PVC size 1 GB) failed to be created for 600 secs
1898521 - [CephFS] Deleting cephfsplugin pod along with app pods will make PV remain in Released state after deleting the PVC
1902627 - must-gather should wait for debug pods to be in ready state
1904171 - RGW Service is unavailable for a short period during upgrade to OCS 4.6

5. 

Bug Fix(es):
* NVD feed fixed in Clair-v2 (clair-jwt image)

3. Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3

4. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

5. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

6. 

Bug Fix(es):

* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)
* The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)
* The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)
* [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)
* The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)
* Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)
* [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

3. 
Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

5. 

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2019-10-29-3 tvOS 13.2

tvOS 13.2 is now available and addresses the following:

Accounts
Available for: Apple TV 4K and Apple TV HD
Impact: A remote attacker may be able to leak memory
Description: An out-of-bounds read was addressed with improved input validation.
CVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at Technische Universität Darmstadt

App Store
Available for: Apple TV 4K and Apple TV HD
Impact: A local attacker may be able to login to the account of a previously logged in user without valid credentials.
CVE-2019-8803: Kiyeon An, 차민규 (CHA Minkyu)

Audio
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8785: Ian Beer of Google Project Zero
CVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure

AVEVideoEncoder
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure

File System Events
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8798: ABC Research s.r.o. working with Trend Micro's Zero Day Initiative

Kernel
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to read restricted memory
Description: A validation issue was addressed with improved input sanitization.
CVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure

Kernel
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8782: Cheolung Lee of LINE+ Security Team
CVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team
CVE-2019-8808: found by OSS-Fuzz
CVE-2019-8811: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8812: an anonymous researcher
CVE-2019-8814: Cheolung Lee of LINE+ Security Team
CVE-2019-8816: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8819: Cheolung Lee of LINE+ Security Team
CVE-2019-8820: Samuel Groß of Google Project Zero
CVE-2019-8821: Sergei Glazunov of Google Project Zero
CVE-2019-8822: Sergei Glazunov of Google Project Zero
CVE-2019-8823: Sergei Glazunov of Google Project Zero

WebKit Process Model
Available for: Apple TV 4K and Apple TV HD
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: Multiple memory corruption issues were addressed with improved memory handling.
CVE-2019-8815: Apple

Additional recognition

CFNetwork
We would like to acknowledge Lily Chen of Google for their assistance.

Kernel
We would like to acknowledge Jann Horn of Google Project Zero for their assistance.

WebKit
We would like to acknowledge Dlive of Tencent's Xuanwu Lab and Zhiyi Zhang of Codesafe Team of Legendsec at Qi'anxin Group for their assistance.

Installation note:

Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software."

To check the current version of software, select "Settings -> General -> About."

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at:
https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNATURE-----

iQJdBAEBCABHFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl24p5UpHHByb2R1Y3Qt
c2VjdXJpdHktbm9yZXBseUBsaXN0cy5hcHBsZS5jb20ACgkQBz4uGe3y0M2DIQ/9
FQmnN+1/tdXaFFI1PtdJ9hgXONcdsi+D05mREDTX7v0VaLzChX/N3DccI00Z1uT5
VNKHRjInGYDZoO/UntzAWoZa+tcueaY23XhN9xTYrUlt1Ol1gIsaxTEgPtax4B9A
PoqWb6S+oK1SHUxglGnlLtXkcyt3WHJ5iqan7BM9XX6dsriwgoBgKADpFi3FCXoa
cFIvpoM6ZhxYyMPpxmMc1IRwgjDwOn2miyjkSaAONXw5R5YGRxSsjq+HkzYE3w1m
m2NZElUB1nRmlyuU3aMsHUTxwAnfzryPiHRGUTcNZao39YBsyWz56sr3++g7qmnD
uZZzBnISQpC6oJCWclw3UHcKHH+V0+1q059GHBoku6Xmkc5bPRnKdFgSf5OvyQUw
XGjwL5UbGB5eTtdj/Kx5Rd/m5fFIUxVu7HB3bGQGhYHIc9iTdi9j3mCd3nOHCIEj
Re07c084jl2Git4sH2Tva7tOqFyI2IyNVJ0LjBXO54fAC2mtFz3mkDFxCEzL5V92
O/Wct2T6OpYghzkrOOlUEAQJTbwJjTZBWsUubcOoJo6P9JUPBDJKB0ibAaCWrt9I
8OU5wRr3q0fTA3N/qdGGbQ/tgUwiMHGuqrHMv0XYGPfO5Qg5GuHpTYchZrP5nFwf
ziQuQtO92b1FA4sDI+ue1sDIG84tPrkTAeLmveBjezc=
=KsmX
-----END PGP SIGNATURE-----

. 
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673
                   CVE-2022-24407
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory.

See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
* grafana: Snapshot authentication bypass (CVE-2021-39226)
* golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
* nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
* grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. 
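The first item in the Security Fix(es) list above, CVE-2021-3121, is a missing-index-validation bug in gogo/protobuf's generated unmarshalling code. As a hedged illustration of that bug class only — this is not gogo/protobuf's code, and the one-byte length-prefix format below is invented for the example — the Python sketch shows the kind of bounds check whose absence lets a crafted message read outside its buffer:

```python
def read_field(buf: bytes, offset: int):
    """Read one length-prefixed field: a 1-byte declared length, then payload.

    Hypothetical framing for illustration; real protobuf uses varint-encoded
    wire types, but the validation concern is the same.
    """
    if offset < 0 or offset >= len(buf):
        raise ValueError("offset out of range")
    length = buf[offset]          # attacker-controlled declared length
    end = offset + 1 + length
    # The CVE-2021-3121 class of bug: trusting `length` and indexing
    # without checking that it stays inside the buffer.
    if end > len(buf):
        raise ValueError("declared length exceeds buffer")
    return buf[offset + 1:end], end

# Well-formed input: a field declaring 3 payload bytes.
payload, next_off = read_field(b"\x03abcXY", 0)  # payload == b"abc", next_off == 4
```

A malformed message such as `b"\x09ab"` (declared length 9, only 2 bytes present) is rejected by the `end > len(buf)` check instead of reading past the buffer, which is the defensive behavior the upstream fix restores.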
Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for moderate instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up
navigates to OperatorHub -\u003e Operator Installation page\n1999314 - console-operator is slow to mark Degraded as False once console starts working\n1999425 - kube-apiserver with \"[SHOULD NOT HAPPEN] failed to update managedFields\" err=\"failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)\n1999556 - \"master\" pool should be updated before the CVO reports available at the new version occurred\n1999578 - AWS EFS CSI tests are constantly failing\n1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages\n1999619 - cloudinit is malformatted if a user sets a password during VM creation flow\n1999621 - Empty ssh_authorized_keys entry is added to VM\u0027s cloudinit if created from a customize flow\n1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined\n1999668 - openshift-install destroy cluster panic\u0027s when given invalid credentials to cloud provider (Azure Stack Hub)\n1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource\n1999771 - revert \"force cert rotation every couple days for development\" in 4.10\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace. 
\n1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions\n1999903 - Click \"This is a CD-ROM boot source\" ticking \"Use template size PVC\" on pvc upload form\n1999983 - No way to clear upload error from template boot source\n2000081 - [IPI baremetal]  The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter\n2000096 - Git URL is not re-validated on edit build-config form reload\n2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig\n2000236 - Confusing usage message from dynkeepalived CLI\n2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported\n2000430 - bump cluster-api-provider-ovirt version in installer\n2000450 - 4.10: Enable static PV multi-az test\n2000490 - All critical alerts shipped by CMO should have links to a runbook\n2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)\n2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster\n2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled\n2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console\n2000754 - IPerf2 tests should be lower\n2000846 - Structure logs in the entire codebase of Local Storage Operator\n2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24\n2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM\n2000938 - CVO does not respect changes to a Deployment strategy\n2000963 - \u0027Inline-volume (default fs)] volumes should store data\u0027 tests are failing on OKD with updated selinux-policy\n2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don\u0027t have snapshot and should be fullClone\n2001240 - Remove response headers for downloads of binaries from 
OpenShift WebConsole\n2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error\n2001337 - Details Card in ODF Dashboard mentions OCS\n2001339 - fix text content hotplug\n2001413 - [e2e][automation] add/delete nic and disk to template\n2001441 - Test: oc adm must-gather runs successfully for audit logs -  fail due to startup log\n2001442 - Empty termination.log file for the kube-apiserver has too permissive mode\n2001479 - IBM Cloud DNS unable to create/update records\n2001566 - Enable alerts for prometheus operator in UWM\n2001575 - Clicking on the perspective switcher shows a white page with loader\n2001577 - Quick search placeholder is not displayed properly when the search string is removed\n2001578 - [e2e][automation] add tests for vm dashboard tab\n2001605 - PVs remain in Released state for a long time after the claim is deleted\n2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options\n2001620 - Cluster becomes degraded if it can\u0027t talk to Manila\n2001760 - While creating \u0027Backing Store\u0027, \u0027Bucket Class\u0027, \u0027Namespace Store\u0027 user is navigated to \u0027Installed Operators\u0027 page after clicking on ODF\n2001761 - Unable to apply cluster operator storage for SNO on GCP platform. 
\n2001765 - Some error message in the log  of diskmaker-manager caused confusion\n2001784 - show loading page before final results instead of showing a transient message No log files exist\n2001804 - Reload feature on Environment section in Build Config form does not work properly\n2001810 - cluster admin unable to view BuildConfigs in all namespaces\n2001817 - Failed to load RoleBindings list that will lead to \u2018Role name\u2019 is not able to be selected on Create RoleBinding page as well\n2001823 - OCM controller must update operator status\n2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start\n2001835 - Could not select image tag version when create app from dev console\n2001855 - Add capacity is disabled for ocs-storagecluster\n2001856 - Repeating event: MissingVersion no image found for operand pod\n2001959 - Side nav list borders don\u0027t extend to edges of container\n2002007 - Layout issue on \"Something went wrong\" page\n2002010 - ovn-kube may never attempt to retry a pod creation\n2002012 - Cannot change volume mode when cloning a VM from a template\n2002027 - Two instances of Dotnet helm chart show as one in topology\n2002075 - opm render does not automatically pulling in the image(s) used in the deployments\n2002121 - [OVN] upgrades failed for IPI  OSP16 OVN  IPSec cluster\n2002125 - Network policy details page heading should be updated to Network Policy details\n2002133 - [e2e][automation] add support/virtualization and improve deleteResource\n2002134 - [e2e][automation] add test to verify vm details tab\n2002215 - Multipath day1 not working on s390x\n2002238 - Image stream tag is not persisted when switching from yaml to form editor\n2002262 - [vSphere] Incorrect user agent in vCenter sessions list\n2002266 - SinkBinding create form doesn\u0027t allow to use subject name, instead of label selector\n2002276 - OLM fails to upgrade operators immediately\n2002300 - Altering the Schedule Profile configurations 
doesn\u0027t affect the placement of the pods\n2002354 - Missing DU configuration \"Done\" status reporting during ZTP flow\n2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn\u0027t use commonjs\n2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation\n2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN\n2002397 - Resources search is inconsistent\n2002434 - CRI-O leaks some children PIDs\n2002443 - Getting undefined error on create local volume set page\n2002461 - DNS operator performs spurious updates in response to API\u0027s defaulting of service\u0027s internalTrafficPolicy\n2002504 - When the openshift-cluster-storage-operator is degraded because of \"VSphereProblemDetectorController_SyncError\", the insights operator is not sending the logs from all pods. \n2002559 - User preference for topology list view does not follow when a new namespace is created\n2002567 - Upstream SR-IOV worker doc has broken links\n2002588 - Change text to be sentence case to align with PF\n2002657 - ovn-kube egress IP monitoring is using a random port over the node network\n2002713 - CNO: OVN logs should have millisecond resolution\n2002748 - [ICNI2] \u0027ErrorAddingLogicalPort\u0027 failed to handle external GW check: timeout waiting for namespace event\n2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite\n2002763 - Two storage systems getting created with external mode RHCS\n2002808 - KCM does not use web identity credentials\n2002834 - Cluster-version operator does not remove unrecognized volume mounts\n2002896 - Incorrect result return when user filter data by name on search page\n2002950 - Why spec.containers.command is not created with \"oc create deploymentconfig \u003cdc-name\u003e --image=\u003cimage\u003e -- \u003ccommand\u003e\"\n2003096 - [e2e][automation] check bootsource URL is displaying on review step\n2003113 - OpenShift Baremetal IPI 
installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role\n2003120 - CI: Uncaught error with ResizeObserver on operand details page\n2003145 - Duplicate operand tab titles causes \"two children with the same key\" warning\n2003164 - OLM, fatal error: concurrent map writes\n2003178 - [FLAKE][knative] The UI doesn\u0027t show updated traffic distribution after accepting the form\n2003193 - Kubelet/crio leaks netns and veth ports in the host\n2003195 - OVN CNI should ensure host veths are removed\n2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting \u0027-e JENKINS_PASSWORD=password\u0027  ENV  which was working for old container images\n2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace\n2003239 - \"[sig-builds][Feature:Builds][Slow] can use private repositories as build input\" tests fail outside of CI\n2003244 - Revert libovsdb client code\n2003251 - Patternfly components with list element has list item bullet when they should not. 
\n2003252 - \"[sig-builds][Feature:Builds][Slow] starting a build using CLI  start-build test context override environment BUILD_LOGLEVEL in buildconfig\" tests do not work as expected outside of CI\n2003269 - Rejected pods should be filtered from admission regression\n2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release\n2003426 - [e2e][automation]  add test for vm details bootorder\n2003496 - [e2e][automation] add test for vm resources requirment settings\n2003641 - All metal ipi jobs are failing in 4.10\n2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state\n2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node\n2003683 - Samples operator is panicking in CI\n2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster \"Connection Details\" page\n2003715 - Error on creating local volume set after selection of the volume mode\n2003743 - Remove workaround keeping /boot RW for kdump support\n2003775 - etcd pod on CrashLoopBackOff after master replacement procedure\n2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver\n2003792 - Monitoring metrics query graph flyover panel is useless\n2003808 - Add Sprint 207 translations\n2003845 - Project admin cannot access image vulnerabilities view\n2003859 - sdn emits events with garbage messages\n2003896 - (release-4.10) ApiRequestCounts conditional gatherer\n2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas\n2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes\n2004059 - [e2e][automation] fix current tests for downstream\n2004060 - Trying to use basic spring boot sample causes crash on Firefox\n2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn\u0027t close after selection\n2004127 - [flake] openshift-controller-manager event 
reason/SuccessfulDelete occurs too frequently\n2004203 - build config\u0027s created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver\n2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory\n2004449 - Boot option recovery menu prevents image boot\n2004451 - The backup filename displayed in the RecentBackup message is incorrect\n2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts\n2004508 - TuneD issues with the recent ConfigParser changes. \n2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions\n2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs\n2004578 - Monitoring and node labels missing for an external storage platform\n2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days\n2004596 - [4.10] Bootimage bump tracker\n2004597 - Duplicate ramdisk log containers running\n2004600 - Duplicate ramdisk log containers running\n2004609 - output of \"crictl inspectp\" is not complete\n2004625 - BMC credentials could be logged if they change\n2004632 - When LE takes a large amount of time, multiple whereabouts are seen\n2004721 - ptp/worker custom threshold doesn\u0027t change ptp events threshold\n2004736 - [knative] Create button on new Broker form is inactive despite form being filled\n2004796 - [e2e][automation] add test for vm scheduling policy\n2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque\n2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card\n2004901 - [e2e][automation] improve kubevirt devconsole tests\n2004962 - Console frontend job consuming too much CPU in CI\n2005014 - state of ODF StorageSystem is misreported during installation or uninstallation\n2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines\n2005179 - pods status 
filter is not taking effect\n2005182 - sync list of deprecated apis about to be removed\n2005282 - Storage cluster name is given as title in StorageSystem details page\n2005355 - setuptools 58 makes Kuryr CI fail\n2005407 - ClusterNotUpgradeable Alert should be set to Severity Info\n2005415 - PTP operator with sidecar api configured throws  bind: address already in use\n2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console\n2005554 - The switch status of the button \"Show default project\" is not revealed correctly in code\n2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable\n2005761 - QE - Implementing crw-basic feature file\n2005783 - Fix accessibility issues in the \"Internal\" and \"Internal - Attached Mode\" Installation Flow\n2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty\n2005854 - SSH NodePort service is created for each VM\n2005901 - KS, KCM and KA going Degraded during master nodes upgrade\n2005902 - Current UI flow for MCG only deployment is confusing and doesn\u0027t reciprocate any message to the end-user\n2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics\n2005971 - Change telemeter to report the Application Services product usage metrics\n2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files\n2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased\n2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types\n2006101 - Power off fails for drivers that don\u0027t support Soft power off\n2006243 - Metal IPI upgrade jobs are running out of disk space\n2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset  for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer:  ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed 
files with a MachineConfig resource degrades the MachineConfigPool\n2032566 - Cluster-ingress-router does not support Azure Stack\n2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso\n2032589 - DeploymentConfigs ignore resolve-names annotation\n2032732 - Fix styling conflicts due to recent console-wide CSS changes\n2032831 - Knative Services and Revisions are not shown when Service has no ownerReference\n2032851 - Networking is \"not available\" in Virtualization Overview\n2032926 - Machine API components should use K8s 1.23 dependencies\n2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24\n2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster\n2033013 - Project dropdown in user preferences page is broken\n2033044 - Unable to change import strategy if devfile is invalid\n2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable\n2033111 - IBM VPC operator library bump removed global CLI args\n2033138 - \"No model registered for Templates\" shows on customize wizard\n2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected\n2033239 - [IPI on Alibabacloud] \u0027openshift-install\u0027 gets the wrong region (\u2018cn-hangzhou\u2019) selected\n2033257 - unable to use configmap for helm charts\n2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn\u2019t triggered\n2033290 - Product builds for console are failing\n2033382 - MAPO is missing machine annotations\n2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations\n2033403 - Devfile catalog does not show provider information\n2033404 - Cloud event schema is missing source type and resource field is using wrong value\n2033407 - Secure route data is not pre-filled in edit flow form\n2033422 - CNO not allowing LGW conversion from SGW in 
runtime\n2033434 - Offer darwin/arm64 oc in clidownloads\n2033489 - CCM operator failing on baremetal platform\n2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver\n2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains\n2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating \"cluster-infrastructure-02-config.yml\" status, which leads to bootstrap failed and all master nodes NotReady\n2033538 - Gather Cost Management Metrics Custom Resource\n2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined\n2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page\n2033634 - list-style-type: disc is applied to the modal dropdowns\n2033720 - Update samples in 4.10\n2033728 - Bump OVS to 2.16.0-33\n2033729 - remove runtime request timeout restriction for azure\n2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended\n2033749 - Azure Stack Terraform fails without Local Provider\n2033750 - Local volume should pull multi-arch image for kube-rbac-proxy\n2033751 - Bump kubernetes to 1.23\n2033752 - make verify fails due to missing yaml-patch\n2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource\n2034004 - [e2e][automation] add tests for VM snapshot improvements\n2034068 - [e2e][automation] Enhance tests for 4.10 downstream\n2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore\n2034097 - [OVN] After edit EgressIP object, the status is not correct\n2034102 - [OVN] Recreate the  deleted EgressIP object got  InvalidEgressIP  warning\n2034129 - blank page returned when clicking \u0027Get started\u0027 button\n2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0\n2034153 - CNO does not verify MTU migration for 
OpenShiftSDN\n2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled\n2034170 - Use function.knative.dev for Knative Functions related labels\n2034190 - unable to add new VirtIO disks to VMs\n2034192 - Prometheus fails to insert reporting metrics when the sample limit is met\n2034243 - regular user cant load template list\n2034245 - installing a cluster on aws, gcp always fails with \"Error: Incompatible provider version\"\n2034248 - GPU/Host device modal is too small\n2034257 - regular user `Create VM` missing permissions alert\n2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2034287 - do not block upgrades if we can\u0027t create storageclass in 4.10 in vsphere\n2034300 - Du validator policy is NonCompliant after DU configuration completed\n2034319 - Negation constraint is not validating packages\n2034322 - CNO doesn\u0027t pick up settings required when ExternalControlPlane topology\n2034350 - The CNO should implement the Whereabouts IP reconciliation cron job\n2034362 - update description of disk interface\n2034398 - The Whereabouts IPPools CRD should include the podref field\n2034409 - Default CatalogSources should be pointing to 4.10 index images\n2034410 - Metallb BGP, BFD:  prometheus is not scraping the frr metrics\n2034413 - cloud-network-config-controller fails to init with secret \"cloud-credentials\" not found in manual credential mode\n2034460 - Summary: cloud-network-config-controller does not account for different environment\n2034474 - Template\u0027s boot source is \"Unknown source\" before and after set enableCommonBootImageImport to true\n2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren\u0027t working properly\n2034493 - Change cluster version operator log level\n2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list\n2034527 - IPI deployment 
fails \u0027timeout reached while inspecting the node\u0027 when provisioning network ipv6\n2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer\n2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART\n2034537 - Update team\n2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds\n2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success\n2034577 - Current OVN gateway mode should be reflected on node annotation as well\n2034621 - context menu not popping up for application group\n2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10\n2034624 - Warn about unsupported CSI driver in vsphere operator\n2034647 - missing volumes list in snapshot modal\n2034648 - Rebase openshift-controller-manager to 1.23\n2034650 - Rebase openshift/builder to 1.23\n2034705 - vSphere: storage e2e tests logging configuration data\n2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail. 
\n2034766 - Special Resource Operator(SRO) -  no cert-manager pod created in dual stack environment\n2034785 - ptpconfig with summary_interval cannot be applied\n2034823 - RHEL9 should be starred in template list\n2034838 - An external router can inject routes if no service is added\n2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent\n2034879 - Lifecycle hook\u0027s name and owner shouldn\u0027t be allowed to be empty\n2034881 - Cloud providers components should use K8s 1.23 dependencies\n2034884 - ART cannot build the image because it tries to download controller-gen\n2034889 - `oc adm prune deployments` does not work\n2034898 - Regression in recently added Events feature\n2034957 - update openshift-apiserver to kube 1.23.1\n2035015 - ClusterLogForwarding CR remains stuck remediating forever\n2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster\n2035141 - [RFE] Show GPU/Host devices in template\u0027s details tab\n2035146 - \"kubevirt-plugin~PVC cannot be empty\" shows on add-disk modal while adding existing PVC\n2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting\n2035199 - IPv6 support in mtu-migration-dispatcher.yaml\n2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing\n2035250 - Peering with ebgp peer over multi-hops doesn\u0027t work\n2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices\n2035315 - invalid test cases for AWS passthrough mode\n2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env\n2035321 - Add Sprint 211 translations\n2035326 - [ExternalCloudProvider] installation with additional network on workers fails\n2035328 - Ccoctl does not ignore credentials request manifest marked for deletion\n2035333 - Kuryr orphans ports on 504 errors from Neutron\n2035348 - Fix two grammar issues in kubevirt-plugin.json strings\n2035393 - oc set data 
--dry-run=server  makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN  Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain  olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking -  networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR  is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to  attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error  \"unable to pull manifest from example.com/busy.box:v5  invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nInstallation note:\n\nSafari 13.0.3 may be obtained from the Mac App Store. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x\nRed Hat Enterprise Linux Workstation (v. 
7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - noarch\n\n3. Description:\n\nWebKitGTK+ is port of the WebKit portable web rendering engine to the GTK+\nplatform. These packages provide WebKitGTK+ for GTK+ 3. \n\nThe following packages have been upgraded to a later upstream version:\nwebkitgtk4 (2.28.2). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nppc64:\nwebkitgtk4-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm\n\nppc64le:\nwebkitgtk4-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm\n\ns390x:\nwebkitgtk4-2.28.2-2.el7.s390.rpm\nwebkitgtk4-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390x.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nppc64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm\n\ns390x:\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. Bugs fixed (https://bugzilla.redhat.com/):\n\n1823765 - nfd-workers crash under an ipv6 environment\n1838802 - mysql8 connector from operatorhub does not work with metering operator\n1838845 - Metering operator can\u0027t connect to postgres DB from Operator Hub\n1841883 - namespace-persistentvolumeclaim-usage  query returns unexpected values\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1868294 - NFD operator does not allow customisation of nfd-worker.conf\n1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration\n1890672 - NFD is missing a build flag to build correctly\n1890741 - path to the CA trust bundle ConfigMap is broken in report operator\n1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster\n1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel\n1900125 - FIPS error while generating RSA private key for CA\n1906129 - OCP 4.7:  Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub\n1908492 - OCP 4.7:  Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub\n1913837 - The CI and ART 4.7 metering images are not mirrored\n1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no 
supported image for ppc64le\n1916010 - olm skip range is set to the wrong range\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923998 - NFD Operator is failing to update and remains in Replacing state\n\n5. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1207179 - Select items matching non existing pattern does not unselect already selected\n1566027 - can\u0027t correctly compute contents size if hidden files are included\n1569868 - Browsing samba shares using gvfs is very slow\n1652178 - [RFE] perf-tool run on wayland\n1656262 - The terminal\u0027s character display is unclear on rhel8 guest after installing gnome\n1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled\n1692536 - login screen shows after gnome-initial-setup\n1706008 - Sound Effect sometimes fails to change to selected option. \n1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead. \n1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined\n1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly\n1758891 - tracker-devel subpackage missing from el8 repos\n1775345 - Rebase xdg-desktop-portal to 1.6\n1778579 - Nautilus does not respect umask settings. 
\n1779691 - Rebase xdg-desktop-portal-gtk to 1.6\n1794045 - There are two different high contrast versions of desktop icons\n1804719 - Update vte291 to 0.52.4\n1805929 - RHEL 8.1 gnome-shell-extension errors\n1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp\n1814820 - No checkbox to install updates in the shutdown dialog\n1816070 - \"search for an application to open this file\" dialog broken\n1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution\n1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1817143 - Rebase WebKitGTK to 2.28\n1820759 - Include IO stall fixes\n1820760 - Include IO fixes\n1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening\n1827030 - gnome-settings-daemon: subscription notification on CentOS Stream\n1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content\n1832347 - [Rebase] Rebase pipewire to 0.3.x\n1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install\n1837381 - Backport screen cast improvements to 8.3\n1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version\n1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6\n1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113\n1840080 - Can not control top bar menus via keys in Wayland\n1840788 - [flatpak][rhel8] unable to build potrace as dependency\n1843486 - Software crash after clicking Updates tab\n1844578 - anaconda very rarely crashes at startup with a pygobject traceback\n1846191 - usb adapters hotplug crashes gnome-shell\n1847051 - JS ERROR: TypeError: area is null\n1847061 - File search doesn\u0027t work under certain locales\n1847062 - gnome-remote-desktop crash on QXL graphics\n1847203 - gnome-shell: 
get_top_visible_window_actor(): gnome-shell killed by SIGSEGV\n1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow\n1854734 - PipeWire 0.2 should be required by xdg-desktop-portal\n1866332 - Remove obsolete libusb-devel dependency\n1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at \"Started GNOME Display Manager\" - GDM regression issue",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8808"
      },
      {
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-8808",
        "trust": 2.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "155069",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4013",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4456",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4233",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0382",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155216",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-14248",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-160243",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8808",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160624",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "155059",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159375",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "id": "VAR-201912-1851",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160243"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:27:26.859000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Multiple Apple product WebKit Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=106072"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210721"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210723"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210724"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210725"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210726"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht201222"
      },
      {
        "trust": 0.6,
        "url": "https://wpewebkit.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://webkitgtk.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20193044-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210725"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155069/apple-security-advisory-2019-10-29-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159816/red-hat-security-advisory-2020-4451-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0382"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155216/webkitgtk-wpe-webkit-code-execution-xss.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4456/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210723"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4013/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156742/gentoo-linux-security-advisory-202003-22.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4233/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166279/red-hat-security-advisory-2022-0056-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-30746"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.2,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.2,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/fulldisclosure/2019/oct/50"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5605"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8237"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8787"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8794"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8798"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8797"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8803"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8795"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8785"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8688"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8678"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8669"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8644"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8680"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11793"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/site/solutions/537113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15503"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "date": "2020-12-18T19:14:41",
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2019-11-01T17:11:43",
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2019-11-01T17:06:21",
        "db": "PACKETSTORM",
        "id": "155059"
      },
      {
        "date": "2020-09-30T15:47:21",
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2020-11-04T15:24:00",
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "date": "2019-10-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      },
      {
        "date": "2019-12-18T18:15:43.460000",
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160243"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8808"
      },
      {
        "date": "2022-03-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      },
      {
        "date": "2024-11-21T04:50:30.863000",
        "db": "NVD",
        "id": "CVE-2019-8808"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple Apple Product Buffer Error Vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1777"
      }
    ],
    "trust": 0.6
  }
}

VAR-202010-1511

Vulnerability from variot - Updated: 2025-12-22 23:22

A use after free issue was addressed with improved memory management. This issue is fixed in Safari 14.0. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari is Apple's web browser, included by default with macOS and iOS. A resource management error vulnerability exists in Apple Safari; it originates in the aboutBlankURL() function of the WebKit component. Bugs fixed (https://bugzilla.redhat.com/):

1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1945703 - "Guest OS Info" availability in VMI describe is flaky
1958816 - [2.6.z] KubeMacPool fails to start due to OOM likely caused by a high number of Pods running in the cluster
1963275 - migration controller null pointer dereference
1965099 - Live Migration double handoff to virt-handler causes connection failures
1965181 - CDI importer doesn't report AwaitingVDDK like it used to
1967086 - Cloning DataVolumes between namespaces fails while creating cdi-upload pod
1967887 - [2.6.6] nmstate is not progressing on a node and not configuring vlan filtering that causes an outage for VMs
1969756 - Windows VMs fail to start on air-gapped environments
1970372 - Virt-handler fails to verify container-disk
1973227 - segfault in virt-controller during pdb deletion
1974084 - 2.6.6 containers
1975212 - No Virtual Machine Templates Found [EDIT - all templates are marked as depracted]
1975727 - [Regression][VMIO][Warm] The third precopy does not end in warm migration
1977756 - [2.6.z] PVC keeps in pending when using hostpath-provisioner
1982760 - [v2v] no kind VirtualMachine is registered for version \"kubevirt.io/v1\" i...
1986989 - OpenShift Virtualization 2.6.z cannot be upgraded to 4.8.0 initially deployed starting with <= 4.8

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: GNOME security, bug fix, and enhancement update
Advisory ID:       RHSA-2021:1586-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:1586
Issue date:        2021-05-18
CVE Names:         CVE-2019-13012 CVE-2020-9948 CVE-2020-9951
                   CVE-2020-9983 CVE-2020-13543 CVE-2020-13584
====================================================================

1. Summary:

An update for GNOME is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat CodeReady Linux Builder (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64

3. Description:

GNOME is the default desktop environment of Red Hat Enterprise Linux.

The following packages have been upgraded to a later upstream version: accountsservice (0.6.55), webkit2gtk3 (2.30.4).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.4 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

GDM must be restarted for this update to take effect.
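The update-then-restart step above can be sketched as a small script. This is an illustrative sketch, not part of the advisory: the `version_ge` helper approximates RPM version ordering with GNU `sort -V` (for exact comparisons use the rpm library's own comparison), and the fixed version string `2.30.4-1.el8` is taken from the webkit2gtk3 package listed in this advisory.

```shell
#!/bin/sh
# Check whether the installed webkit2gtk3 already includes the fixed build
# from RHSA-2021:1586; if not, report the update + GDM restart that is needed.

version_ge() {
  # True if $1 >= $2 under GNU version sort (approximation of RPM ordering).
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

fixed="2.30.4-1.el8"
# Empty string if rpm is unavailable or the package is not installed.
installed=$(rpm -q --qf '%{VERSION}-%{RELEASE}' webkit2gtk3 2>/dev/null || echo "")

if [ -n "$installed" ] && version_ge "$installed" "$fixed"; then
  echo "webkit2gtk3 $installed: fix already applied"
else
  # Apply the erratum and restart GDM so the fix takes effect:
  echo "update needed: dnf update webkit2gtk3 && systemctl restart gdm"
fi
```

Restarting GDM ends the current graphical session, so schedule the restart accordingly on desktop systems.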

5. Bugs fixed (https://bugzilla.redhat.com/):

837035 - Shortcuts -- alfanumeric vs numpad
1152037 - RFE: use virtio-scsi disk bus with discard='unmap' for guests that support it
1464902 - Crash in dls_async_task_complete
1671761 - Adding new workspaces is broken in gnome session under wayland
1700002 - adding several printers is stalling the printer plugin in GSD
1705392 - Changing screen resolution while recording screen will break the video.
1728632 - CVE-2019-13012 glib2: insecure permissions for files and directories
1728896 - glib2: 'keyfile' backend for gsettings not loaded
1765627 - Can't install both gnome-online-accounts-devel.i686 and gnome-online-accounts-devel.x86_64 on RHEL 8.1
1786496 - gnome-shell killed by SIGABRT in g_assertion_message_expr.cold.16()
1796916 - Notification appears with incorrect "system not registered - register to get updates" message on RHEL8.2 when locale is non-English
1802105 - rpm based extensions in RHEL8 should not receive updates from extensions.gnome.org
1833787 - Unable to disable onscreen keyboard in touch screen machine
1842229 - double-touch desktop icons fails sometimes
1845660 - JS WARNING from gnome-shell [MetaWindowX11]
1846376 - rebase accountsservice to latest release
1854290 - Physical device fails to wakeup via org.gnome.ScreenSaver D-Bus API
1860946 - gnome-shell logs AuthList property defined with 'let' or 'const'
1861357 - Login shows Exclamation Sign with no message for Caps Lock on
1861769 - Authentication fails when Wayland is enabled along with polyinstantiation of /tmp
1865718 - Right click menu is not translated into Japanese when desktop-icons extension is enabled
1870837 - gnome control-center and settings-daemon don't handle systems that are registered but have no attached entitlements properly
1871041 - on screen keyboard (OSK) does not disappear completely, part of OSK remains on the screen
1876291 - [ALL LANG] Unlocalized strings in About -> Register System.
1881312 - [Bug] gnome-shell errors in syslog
1883304 - Rebase to WebKitGTK 2.30
1883868 - [RFE] Dump JS stack trace by default when gnome-shell crashes
1886822 - License differs from actual
1888407 - Flatpak updates and removals get confused if same ref occurs in multiple remotes
1889411 - self-signed cert in owncloud: HTTP Error: Unacceptable TLS certificate
1889528 - [8.4] Right GLX stereo texture is potentially leaked for each closed window
1901212 - CVE-2020-13584 webkitgtk: use-after-free may lead to arbitrary code execution
1901214 - CVE-2020-9948 webkitgtk: type confusion may lead to arbitrary code execution
1901216 - CVE-2020-9951 webkitgtk: use-after-free may lead to arbitrary code execution
1901221 - CVE-2020-9983 webkitgtk: out-of-bounds write may lead to code execution
1903043 - gnome-control-center SEGFAULT at ../panels/printers/pp-printer-entry.c:280
1903568 - CVE-2020-13543 webkitgtk: use-after-free may lead to arbitrary code execution
1906499 - Nautilus creates invalid bookmarks for Samba shares
1918391 - gdm isn't killing the login screen on login after all
1919429 - Ship libdazzle-devel in CRB
1919432 - Ship libepubgen-devel in CRB
1919435 - Ship woff2-devel in CRB
1919467 - Mutter: mouse click doesn't work when using 10-bit graphic monitor
1921151 - [nvidia Ampere] stutters when using nouveau with llvmpipe

6. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: OpenEXR-2.2.0-12.el8.src.rpm accountsservice-0.6.55-1.el8.src.rpm atkmm-2.24.2-7.el8.src.rpm cairomm-1.12.0-8.el8.src.rpm chrome-gnome-shell-10.1-7.el8.src.rpm dleyna-core-0.6.0-3.el8.src.rpm dleyna-server-0.6.0-3.el8.src.rpm enchant2-2.2.3-3.el8.src.rpm gdm-3.28.3-39.el8.src.rpm geoclue2-2.5.5-2.el8.src.rpm geocode-glib-3.26.0-3.el8.src.rpm gjs-1.56.2-5.el8.src.rpm glibmm24-2.56.0-2.el8.src.rpm gnome-boxes-3.36.5-8.el8.src.rpm gnome-control-center-3.28.2-27.el8.src.rpm gnome-online-accounts-3.28.2-2.el8.src.rpm gnome-photos-3.28.1-4.el8.src.rpm gnome-settings-daemon-3.32.0-14.el8.src.rpm gnome-shell-3.32.2-30.el8.src.rpm gnome-shell-extensions-3.32.1-14.el8.src.rpm gnome-software-3.36.1-5.el8.src.rpm gnome-terminal-3.28.3-3.el8.src.rpm gtk2-2.24.32-5.el8.src.rpm gtkmm24-2.24.5-6.el8.src.rpm gtkmm30-3.22.2-3.el8.src.rpm gvfs-1.36.2-11.el8.src.rpm libdazzle-3.28.5-2.el8.src.rpm libepubgen-0.1.0-3.el8.src.rpm libsigc++20-2.10.0-6.el8.src.rpm libvisual-0.4.0-25.el8.src.rpm mutter-3.32.2-57.el8.src.rpm nautilus-3.28.1-15.el8.src.rpm pangomm-2.40.1-6.el8.src.rpm soundtouch-2.0.0-3.el8.src.rpm webkit2gtk3-2.30.4-1.el8.src.rpm woff2-1.0.2-5.el8.src.rpm

aarch64: OpenEXR-debuginfo-2.2.0-12.el8.aarch64.rpm OpenEXR-debugsource-2.2.0-12.el8.aarch64.rpm OpenEXR-libs-2.2.0-12.el8.aarch64.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.aarch64.rpm accountsservice-0.6.55-1.el8.aarch64.rpm accountsservice-debuginfo-0.6.55-1.el8.aarch64.rpm accountsservice-debugsource-0.6.55-1.el8.aarch64.rpm accountsservice-libs-0.6.55-1.el8.aarch64.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.aarch64.rpm atkmm-2.24.2-7.el8.aarch64.rpm atkmm-debuginfo-2.24.2-7.el8.aarch64.rpm atkmm-debugsource-2.24.2-7.el8.aarch64.rpm cairomm-1.12.0-8.el8.aarch64.rpm cairomm-debuginfo-1.12.0-8.el8.aarch64.rpm cairomm-debugsource-1.12.0-8.el8.aarch64.rpm chrome-gnome-shell-10.1-7.el8.aarch64.rpm enchant2-2.2.3-3.el8.aarch64.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.aarch64.rpm enchant2-debuginfo-2.2.3-3.el8.aarch64.rpm enchant2-debugsource-2.2.3-3.el8.aarch64.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.aarch64.rpm gdm-3.28.3-39.el8.aarch64.rpm gdm-debuginfo-3.28.3-39.el8.aarch64.rpm gdm-debugsource-3.28.3-39.el8.aarch64.rpm geoclue2-2.5.5-2.el8.aarch64.rpm geoclue2-debuginfo-2.5.5-2.el8.aarch64.rpm geoclue2-debugsource-2.5.5-2.el8.aarch64.rpm geoclue2-demos-2.5.5-2.el8.aarch64.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.aarch64.rpm geoclue2-libs-2.5.5-2.el8.aarch64.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.aarch64.rpm geocode-glib-3.26.0-3.el8.aarch64.rpm geocode-glib-debuginfo-3.26.0-3.el8.aarch64.rpm geocode-glib-debugsource-3.26.0-3.el8.aarch64.rpm geocode-glib-devel-3.26.0-3.el8.aarch64.rpm gjs-1.56.2-5.el8.aarch64.rpm gjs-debuginfo-1.56.2-5.el8.aarch64.rpm gjs-debugsource-1.56.2-5.el8.aarch64.rpm gjs-tests-debuginfo-1.56.2-5.el8.aarch64.rpm glibmm24-2.56.0-2.el8.aarch64.rpm glibmm24-debuginfo-2.56.0-2.el8.aarch64.rpm glibmm24-debugsource-2.56.0-2.el8.aarch64.rpm gnome-control-center-3.28.2-27.el8.aarch64.rpm gnome-control-center-debuginfo-3.28.2-27.el8.aarch64.rpm gnome-control-center-debugsource-3.28.2-27.el8.aarch64.rpm 
gnome-online-accounts-3.28.2-2.el8.aarch64.rpm gnome-online-accounts-debuginfo-3.28.2-2.el8.aarch64.rpm gnome-online-accounts-debugsource-3.28.2-2.el8.aarch64.rpm gnome-online-accounts-devel-3.28.2-2.el8.aarch64.rpm gnome-settings-daemon-3.32.0-14.el8.aarch64.rpm gnome-settings-daemon-debuginfo-3.32.0-14.el8.aarch64.rpm gnome-settings-daemon-debugsource-3.32.0-14.el8.aarch64.rpm gnome-shell-3.32.2-30.el8.aarch64.rpm gnome-shell-debuginfo-3.32.2-30.el8.aarch64.rpm gnome-shell-debugsource-3.32.2-30.el8.aarch64.rpm gnome-software-3.36.1-5.el8.aarch64.rpm gnome-software-debuginfo-3.36.1-5.el8.aarch64.rpm gnome-software-debugsource-3.36.1-5.el8.aarch64.rpm gnome-terminal-3.28.3-3.el8.aarch64.rpm gnome-terminal-debuginfo-3.28.3-3.el8.aarch64.rpm gnome-terminal-debugsource-3.28.3-3.el8.aarch64.rpm gnome-terminal-nautilus-3.28.3-3.el8.aarch64.rpm gnome-terminal-nautilus-debuginfo-3.28.3-3.el8.aarch64.rpm gtk2-2.24.32-5.el8.aarch64.rpm gtk2-debuginfo-2.24.32-5.el8.aarch64.rpm gtk2-debugsource-2.24.32-5.el8.aarch64.rpm gtk2-devel-2.24.32-5.el8.aarch64.rpm gtk2-devel-debuginfo-2.24.32-5.el8.aarch64.rpm gtk2-devel-docs-2.24.32-5.el8.aarch64.rpm gtk2-immodule-xim-2.24.32-5.el8.aarch64.rpm gtk2-immodule-xim-debuginfo-2.24.32-5.el8.aarch64.rpm gtk2-immodules-2.24.32-5.el8.aarch64.rpm gtk2-immodules-debuginfo-2.24.32-5.el8.aarch64.rpm gtkmm24-2.24.5-6.el8.aarch64.rpm gtkmm24-debuginfo-2.24.5-6.el8.aarch64.rpm gtkmm24-debugsource-2.24.5-6.el8.aarch64.rpm gtkmm30-3.22.2-3.el8.aarch64.rpm gtkmm30-debuginfo-3.22.2-3.el8.aarch64.rpm gtkmm30-debugsource-3.22.2-3.el8.aarch64.rpm gvfs-1.36.2-11.el8.aarch64.rpm gvfs-afc-1.36.2-11.el8.aarch64.rpm gvfs-afc-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-afp-1.36.2-11.el8.aarch64.rpm gvfs-afp-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-archive-1.36.2-11.el8.aarch64.rpm gvfs-archive-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-client-1.36.2-11.el8.aarch64.rpm gvfs-client-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-debuginfo-1.36.2-11.el8.aarch64.rpm 
gvfs-debugsource-1.36.2-11.el8.aarch64.rpm gvfs-devel-1.36.2-11.el8.aarch64.rpm gvfs-fuse-1.36.2-11.el8.aarch64.rpm gvfs-fuse-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-goa-1.36.2-11.el8.aarch64.rpm gvfs-goa-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-gphoto2-1.36.2-11.el8.aarch64.rpm gvfs-gphoto2-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-mtp-1.36.2-11.el8.aarch64.rpm gvfs-mtp-debuginfo-1.36.2-11.el8.aarch64.rpm gvfs-smb-1.36.2-11.el8.aarch64.rpm gvfs-smb-debuginfo-1.36.2-11.el8.aarch64.rpm libsigc++20-2.10.0-6.el8.aarch64.rpm libsigc++20-debuginfo-2.10.0-6.el8.aarch64.rpm libsigc++20-debugsource-2.10.0-6.el8.aarch64.rpm libvisual-0.4.0-25.el8.aarch64.rpm libvisual-debuginfo-0.4.0-25.el8.aarch64.rpm libvisual-debugsource-0.4.0-25.el8.aarch64.rpm mutter-3.32.2-57.el8.aarch64.rpm mutter-debuginfo-3.32.2-57.el8.aarch64.rpm mutter-debugsource-3.32.2-57.el8.aarch64.rpm mutter-tests-debuginfo-3.32.2-57.el8.aarch64.rpm nautilus-3.28.1-15.el8.aarch64.rpm nautilus-debuginfo-3.28.1-15.el8.aarch64.rpm nautilus-debugsource-3.28.1-15.el8.aarch64.rpm nautilus-extensions-3.28.1-15.el8.aarch64.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.aarch64.rpm pangomm-2.40.1-6.el8.aarch64.rpm pangomm-debuginfo-2.40.1-6.el8.aarch64.rpm pangomm-debugsource-2.40.1-6.el8.aarch64.rpm soundtouch-2.0.0-3.el8.aarch64.rpm soundtouch-debuginfo-2.0.0-3.el8.aarch64.rpm soundtouch-debugsource-2.0.0-3.el8.aarch64.rpm webkit2gtk3-2.30.4-1.el8.aarch64.rpm webkit2gtk3-debuginfo-2.30.4-1.el8.aarch64.rpm webkit2gtk3-debugsource-2.30.4-1.el8.aarch64.rpm webkit2gtk3-devel-2.30.4-1.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.30.4-1.el8.aarch64.rpm webkit2gtk3-jsc-2.30.4-1.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.30.4-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.30.4-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.aarch64.rpm woff2-1.0.2-5.el8.aarch64.rpm woff2-debuginfo-1.0.2-5.el8.aarch64.rpm woff2-debugsource-1.0.2-5.el8.aarch64.rpm

noarch: gnome-classic-session-3.32.1-14.el8.noarch.rpm gnome-control-center-filesystem-3.28.2-27.el8.noarch.rpm gnome-shell-extension-apps-menu-3.32.1-14.el8.noarch.rpm gnome-shell-extension-auto-move-windows-3.32.1-14.el8.noarch.rpm gnome-shell-extension-common-3.32.1-14.el8.noarch.rpm gnome-shell-extension-dash-to-dock-3.32.1-14.el8.noarch.rpm gnome-shell-extension-desktop-icons-3.32.1-14.el8.noarch.rpm gnome-shell-extension-disable-screenshield-3.32.1-14.el8.noarch.rpm gnome-shell-extension-drive-menu-3.32.1-14.el8.noarch.rpm gnome-shell-extension-horizontal-workspaces-3.32.1-14.el8.noarch.rpm gnome-shell-extension-launch-new-instance-3.32.1-14.el8.noarch.rpm gnome-shell-extension-native-window-placement-3.32.1-14.el8.noarch.rpm gnome-shell-extension-no-hot-corner-3.32.1-14.el8.noarch.rpm gnome-shell-extension-panel-favorites-3.32.1-14.el8.noarch.rpm gnome-shell-extension-places-menu-3.32.1-14.el8.noarch.rpm gnome-shell-extension-screenshot-window-sizer-3.32.1-14.el8.noarch.rpm gnome-shell-extension-systemMonitor-3.32.1-14.el8.noarch.rpm gnome-shell-extension-top-icons-3.32.1-14.el8.noarch.rpm gnome-shell-extension-updates-dialog-3.32.1-14.el8.noarch.rpm gnome-shell-extension-user-theme-3.32.1-14.el8.noarch.rpm gnome-shell-extension-window-grouper-3.32.1-14.el8.noarch.rpm gnome-shell-extension-window-list-3.32.1-14.el8.noarch.rpm gnome-shell-extension-windowsNavigator-3.32.1-14.el8.noarch.rpm gnome-shell-extension-workspace-indicator-3.32.1-14.el8.noarch.rpm

ppc64le: OpenEXR-debuginfo-2.2.0-12.el8.ppc64le.rpm OpenEXR-debugsource-2.2.0-12.el8.ppc64le.rpm OpenEXR-libs-2.2.0-12.el8.ppc64le.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.ppc64le.rpm accountsservice-0.6.55-1.el8.ppc64le.rpm accountsservice-debuginfo-0.6.55-1.el8.ppc64le.rpm accountsservice-debugsource-0.6.55-1.el8.ppc64le.rpm accountsservice-libs-0.6.55-1.el8.ppc64le.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.ppc64le.rpm atkmm-2.24.2-7.el8.ppc64le.rpm atkmm-debuginfo-2.24.2-7.el8.ppc64le.rpm atkmm-debugsource-2.24.2-7.el8.ppc64le.rpm cairomm-1.12.0-8.el8.ppc64le.rpm cairomm-debuginfo-1.12.0-8.el8.ppc64le.rpm cairomm-debugsource-1.12.0-8.el8.ppc64le.rpm chrome-gnome-shell-10.1-7.el8.ppc64le.rpm dleyna-core-0.6.0-3.el8.ppc64le.rpm dleyna-core-debuginfo-0.6.0-3.el8.ppc64le.rpm dleyna-core-debugsource-0.6.0-3.el8.ppc64le.rpm dleyna-server-0.6.0-3.el8.ppc64le.rpm dleyna-server-debuginfo-0.6.0-3.el8.ppc64le.rpm dleyna-server-debugsource-0.6.0-3.el8.ppc64le.rpm enchant2-2.2.3-3.el8.ppc64le.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.ppc64le.rpm enchant2-debuginfo-2.2.3-3.el8.ppc64le.rpm enchant2-debugsource-2.2.3-3.el8.ppc64le.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.ppc64le.rpm gdm-3.28.3-39.el8.ppc64le.rpm gdm-debuginfo-3.28.3-39.el8.ppc64le.rpm gdm-debugsource-3.28.3-39.el8.ppc64le.rpm geoclue2-2.5.5-2.el8.ppc64le.rpm geoclue2-debuginfo-2.5.5-2.el8.ppc64le.rpm geoclue2-debugsource-2.5.5-2.el8.ppc64le.rpm geoclue2-demos-2.5.5-2.el8.ppc64le.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.ppc64le.rpm geoclue2-libs-2.5.5-2.el8.ppc64le.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.ppc64le.rpm geocode-glib-3.26.0-3.el8.ppc64le.rpm geocode-glib-debuginfo-3.26.0-3.el8.ppc64le.rpm geocode-glib-debugsource-3.26.0-3.el8.ppc64le.rpm geocode-glib-devel-3.26.0-3.el8.ppc64le.rpm gjs-1.56.2-5.el8.ppc64le.rpm gjs-debuginfo-1.56.2-5.el8.ppc64le.rpm gjs-debugsource-1.56.2-5.el8.ppc64le.rpm gjs-tests-debuginfo-1.56.2-5.el8.ppc64le.rpm glibmm24-2.56.0-2.el8.ppc64le.rpm 
glibmm24-debuginfo-2.56.0-2.el8.ppc64le.rpm glibmm24-debugsource-2.56.0-2.el8.ppc64le.rpm gnome-control-center-3.28.2-27.el8.ppc64le.rpm gnome-control-center-debuginfo-3.28.2-27.el8.ppc64le.rpm gnome-control-center-debugsource-3.28.2-27.el8.ppc64le.rpm gnome-online-accounts-3.28.2-2.el8.ppc64le.rpm gnome-online-accounts-debuginfo-3.28.2-2.el8.ppc64le.rpm gnome-online-accounts-debugsource-3.28.2-2.el8.ppc64le.rpm gnome-online-accounts-devel-3.28.2-2.el8.ppc64le.rpm gnome-photos-3.28.1-4.el8.ppc64le.rpm gnome-photos-debuginfo-3.28.1-4.el8.ppc64le.rpm gnome-photos-debugsource-3.28.1-4.el8.ppc64le.rpm gnome-photos-tests-3.28.1-4.el8.ppc64le.rpm gnome-settings-daemon-3.32.0-14.el8.ppc64le.rpm gnome-settings-daemon-debuginfo-3.32.0-14.el8.ppc64le.rpm gnome-settings-daemon-debugsource-3.32.0-14.el8.ppc64le.rpm gnome-shell-3.32.2-30.el8.ppc64le.rpm gnome-shell-debuginfo-3.32.2-30.el8.ppc64le.rpm gnome-shell-debugsource-3.32.2-30.el8.ppc64le.rpm gnome-software-3.36.1-5.el8.ppc64le.rpm gnome-software-debuginfo-3.36.1-5.el8.ppc64le.rpm gnome-software-debugsource-3.36.1-5.el8.ppc64le.rpm gnome-terminal-3.28.3-3.el8.ppc64le.rpm gnome-terminal-debuginfo-3.28.3-3.el8.ppc64le.rpm gnome-terminal-debugsource-3.28.3-3.el8.ppc64le.rpm gnome-terminal-nautilus-3.28.3-3.el8.ppc64le.rpm gnome-terminal-nautilus-debuginfo-3.28.3-3.el8.ppc64le.rpm gtk2-2.24.32-5.el8.ppc64le.rpm gtk2-debuginfo-2.24.32-5.el8.ppc64le.rpm gtk2-debugsource-2.24.32-5.el8.ppc64le.rpm gtk2-devel-2.24.32-5.el8.ppc64le.rpm gtk2-devel-debuginfo-2.24.32-5.el8.ppc64le.rpm gtk2-devel-docs-2.24.32-5.el8.ppc64le.rpm gtk2-immodule-xim-2.24.32-5.el8.ppc64le.rpm gtk2-immodule-xim-debuginfo-2.24.32-5.el8.ppc64le.rpm gtk2-immodules-2.24.32-5.el8.ppc64le.rpm gtk2-immodules-debuginfo-2.24.32-5.el8.ppc64le.rpm gtkmm24-2.24.5-6.el8.ppc64le.rpm gtkmm24-debuginfo-2.24.5-6.el8.ppc64le.rpm gtkmm24-debugsource-2.24.5-6.el8.ppc64le.rpm gtkmm30-3.22.2-3.el8.ppc64le.rpm gtkmm30-debuginfo-3.22.2-3.el8.ppc64le.rpm 
gtkmm30-debugsource-3.22.2-3.el8.ppc64le.rpm gvfs-1.36.2-11.el8.ppc64le.rpm gvfs-afc-1.36.2-11.el8.ppc64le.rpm gvfs-afc-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-afp-1.36.2-11.el8.ppc64le.rpm gvfs-afp-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-archive-1.36.2-11.el8.ppc64le.rpm gvfs-archive-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-client-1.36.2-11.el8.ppc64le.rpm gvfs-client-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-debugsource-1.36.2-11.el8.ppc64le.rpm gvfs-devel-1.36.2-11.el8.ppc64le.rpm gvfs-fuse-1.36.2-11.el8.ppc64le.rpm gvfs-fuse-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-goa-1.36.2-11.el8.ppc64le.rpm gvfs-goa-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-gphoto2-1.36.2-11.el8.ppc64le.rpm gvfs-gphoto2-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-mtp-1.36.2-11.el8.ppc64le.rpm gvfs-mtp-debuginfo-1.36.2-11.el8.ppc64le.rpm gvfs-smb-1.36.2-11.el8.ppc64le.rpm gvfs-smb-debuginfo-1.36.2-11.el8.ppc64le.rpm libdazzle-3.28.5-2.el8.ppc64le.rpm libdazzle-debuginfo-3.28.5-2.el8.ppc64le.rpm libdazzle-debugsource-3.28.5-2.el8.ppc64le.rpm libepubgen-0.1.0-3.el8.ppc64le.rpm libepubgen-debuginfo-0.1.0-3.el8.ppc64le.rpm libepubgen-debugsource-0.1.0-3.el8.ppc64le.rpm libsigc++20-2.10.0-6.el8.ppc64le.rpm libsigc++20-debuginfo-2.10.0-6.el8.ppc64le.rpm libsigc++20-debugsource-2.10.0-6.el8.ppc64le.rpm libvisual-0.4.0-25.el8.ppc64le.rpm libvisual-debuginfo-0.4.0-25.el8.ppc64le.rpm libvisual-debugsource-0.4.0-25.el8.ppc64le.rpm mutter-3.32.2-57.el8.ppc64le.rpm mutter-debuginfo-3.32.2-57.el8.ppc64le.rpm mutter-debugsource-3.32.2-57.el8.ppc64le.rpm mutter-tests-debuginfo-3.32.2-57.el8.ppc64le.rpm nautilus-3.28.1-15.el8.ppc64le.rpm nautilus-debuginfo-3.28.1-15.el8.ppc64le.rpm nautilus-debugsource-3.28.1-15.el8.ppc64le.rpm nautilus-extensions-3.28.1-15.el8.ppc64le.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.ppc64le.rpm pangomm-2.40.1-6.el8.ppc64le.rpm pangomm-debuginfo-2.40.1-6.el8.ppc64le.rpm pangomm-debugsource-2.40.1-6.el8.ppc64le.rpm 
soundtouch-2.0.0-3.el8.ppc64le.rpm soundtouch-debuginfo-2.0.0-3.el8.ppc64le.rpm soundtouch-debugsource-2.0.0-3.el8.ppc64le.rpm webkit2gtk3-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-debuginfo-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-debugsource-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-devel-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.30.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.ppc64le.rpm woff2-1.0.2-5.el8.ppc64le.rpm woff2-debuginfo-1.0.2-5.el8.ppc64le.rpm woff2-debugsource-1.0.2-5.el8.ppc64le.rpm

s390x: OpenEXR-debuginfo-2.2.0-12.el8.s390x.rpm OpenEXR-debugsource-2.2.0-12.el8.s390x.rpm OpenEXR-libs-2.2.0-12.el8.s390x.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.s390x.rpm accountsservice-0.6.55-1.el8.s390x.rpm accountsservice-debuginfo-0.6.55-1.el8.s390x.rpm accountsservice-debugsource-0.6.55-1.el8.s390x.rpm accountsservice-libs-0.6.55-1.el8.s390x.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.s390x.rpm atkmm-2.24.2-7.el8.s390x.rpm atkmm-debuginfo-2.24.2-7.el8.s390x.rpm atkmm-debugsource-2.24.2-7.el8.s390x.rpm cairomm-1.12.0-8.el8.s390x.rpm cairomm-debuginfo-1.12.0-8.el8.s390x.rpm cairomm-debugsource-1.12.0-8.el8.s390x.rpm chrome-gnome-shell-10.1-7.el8.s390x.rpm enchant2-2.2.3-3.el8.s390x.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.s390x.rpm enchant2-debuginfo-2.2.3-3.el8.s390x.rpm enchant2-debugsource-2.2.3-3.el8.s390x.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.s390x.rpm gdm-3.28.3-39.el8.s390x.rpm gdm-debuginfo-3.28.3-39.el8.s390x.rpm gdm-debugsource-3.28.3-39.el8.s390x.rpm geoclue2-2.5.5-2.el8.s390x.rpm geoclue2-debuginfo-2.5.5-2.el8.s390x.rpm geoclue2-debugsource-2.5.5-2.el8.s390x.rpm geoclue2-demos-2.5.5-2.el8.s390x.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.s390x.rpm geoclue2-libs-2.5.5-2.el8.s390x.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.s390x.rpm geocode-glib-3.26.0-3.el8.s390x.rpm geocode-glib-debuginfo-3.26.0-3.el8.s390x.rpm geocode-glib-debugsource-3.26.0-3.el8.s390x.rpm geocode-glib-devel-3.26.0-3.el8.s390x.rpm gjs-1.56.2-5.el8.s390x.rpm gjs-debuginfo-1.56.2-5.el8.s390x.rpm gjs-debugsource-1.56.2-5.el8.s390x.rpm gjs-tests-debuginfo-1.56.2-5.el8.s390x.rpm glibmm24-2.56.0-2.el8.s390x.rpm glibmm24-debuginfo-2.56.0-2.el8.s390x.rpm glibmm24-debugsource-2.56.0-2.el8.s390x.rpm gnome-control-center-3.28.2-27.el8.s390x.rpm gnome-control-center-debuginfo-3.28.2-27.el8.s390x.rpm gnome-control-center-debugsource-3.28.2-27.el8.s390x.rpm gnome-online-accounts-3.28.2-2.el8.s390x.rpm gnome-online-accounts-debuginfo-3.28.2-2.el8.s390x.rpm 
gnome-online-accounts-debugsource-3.28.2-2.el8.s390x.rpm gnome-online-accounts-devel-3.28.2-2.el8.s390x.rpm gnome-settings-daemon-3.32.0-14.el8.s390x.rpm gnome-settings-daemon-debuginfo-3.32.0-14.el8.s390x.rpm gnome-settings-daemon-debugsource-3.32.0-14.el8.s390x.rpm gnome-shell-3.32.2-30.el8.s390x.rpm gnome-shell-debuginfo-3.32.2-30.el8.s390x.rpm gnome-shell-debugsource-3.32.2-30.el8.s390x.rpm gnome-software-3.36.1-5.el8.s390x.rpm gnome-software-debuginfo-3.36.1-5.el8.s390x.rpm gnome-software-debugsource-3.36.1-5.el8.s390x.rpm gnome-terminal-3.28.3-3.el8.s390x.rpm gnome-terminal-debuginfo-3.28.3-3.el8.s390x.rpm gnome-terminal-debugsource-3.28.3-3.el8.s390x.rpm gnome-terminal-nautilus-3.28.3-3.el8.s390x.rpm gnome-terminal-nautilus-debuginfo-3.28.3-3.el8.s390x.rpm gtk2-2.24.32-5.el8.s390x.rpm gtk2-debuginfo-2.24.32-5.el8.s390x.rpm gtk2-debugsource-2.24.32-5.el8.s390x.rpm gtk2-devel-2.24.32-5.el8.s390x.rpm gtk2-devel-debuginfo-2.24.32-5.el8.s390x.rpm gtk2-devel-docs-2.24.32-5.el8.s390x.rpm gtk2-immodule-xim-2.24.32-5.el8.s390x.rpm gtk2-immodule-xim-debuginfo-2.24.32-5.el8.s390x.rpm gtk2-immodules-2.24.32-5.el8.s390x.rpm gtk2-immodules-debuginfo-2.24.32-5.el8.s390x.rpm gtkmm24-2.24.5-6.el8.s390x.rpm gtkmm24-debuginfo-2.24.5-6.el8.s390x.rpm gtkmm24-debugsource-2.24.5-6.el8.s390x.rpm gtkmm30-3.22.2-3.el8.s390x.rpm gtkmm30-debuginfo-3.22.2-3.el8.s390x.rpm gtkmm30-debugsource-3.22.2-3.el8.s390x.rpm gvfs-1.36.2-11.el8.s390x.rpm gvfs-afp-1.36.2-11.el8.s390x.rpm gvfs-afp-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-archive-1.36.2-11.el8.s390x.rpm gvfs-archive-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-client-1.36.2-11.el8.s390x.rpm gvfs-client-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-debugsource-1.36.2-11.el8.s390x.rpm gvfs-devel-1.36.2-11.el8.s390x.rpm gvfs-fuse-1.36.2-11.el8.s390x.rpm gvfs-fuse-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-goa-1.36.2-11.el8.s390x.rpm gvfs-goa-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-gphoto2-1.36.2-11.el8.s390x.rpm 
gvfs-gphoto2-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-mtp-1.36.2-11.el8.s390x.rpm gvfs-mtp-debuginfo-1.36.2-11.el8.s390x.rpm gvfs-smb-1.36.2-11.el8.s390x.rpm gvfs-smb-debuginfo-1.36.2-11.el8.s390x.rpm libsigc++20-2.10.0-6.el8.s390x.rpm libsigc++20-debuginfo-2.10.0-6.el8.s390x.rpm libsigc++20-debugsource-2.10.0-6.el8.s390x.rpm libvisual-0.4.0-25.el8.s390x.rpm libvisual-debuginfo-0.4.0-25.el8.s390x.rpm libvisual-debugsource-0.4.0-25.el8.s390x.rpm mutter-3.32.2-57.el8.s390x.rpm mutter-debuginfo-3.32.2-57.el8.s390x.rpm mutter-debugsource-3.32.2-57.el8.s390x.rpm mutter-tests-debuginfo-3.32.2-57.el8.s390x.rpm nautilus-3.28.1-15.el8.s390x.rpm nautilus-debuginfo-3.28.1-15.el8.s390x.rpm nautilus-debugsource-3.28.1-15.el8.s390x.rpm nautilus-extensions-3.28.1-15.el8.s390x.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.s390x.rpm pangomm-2.40.1-6.el8.s390x.rpm pangomm-debuginfo-2.40.1-6.el8.s390x.rpm pangomm-debugsource-2.40.1-6.el8.s390x.rpm soundtouch-2.0.0-3.el8.s390x.rpm soundtouch-debuginfo-2.0.0-3.el8.s390x.rpm soundtouch-debugsource-2.0.0-3.el8.s390x.rpm webkit2gtk3-2.30.4-1.el8.s390x.rpm webkit2gtk3-debuginfo-2.30.4-1.el8.s390x.rpm webkit2gtk3-debugsource-2.30.4-1.el8.s390x.rpm webkit2gtk3-devel-2.30.4-1.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.30.4-1.el8.s390x.rpm webkit2gtk3-jsc-2.30.4-1.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.30.4-1.el8.s390x.rpm webkit2gtk3-jsc-devel-2.30.4-1.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.s390x.rpm woff2-1.0.2-5.el8.s390x.rpm woff2-debuginfo-1.0.2-5.el8.s390x.rpm woff2-debugsource-1.0.2-5.el8.s390x.rpm

x86_64: OpenEXR-debuginfo-2.2.0-12.el8.i686.rpm OpenEXR-debuginfo-2.2.0-12.el8.x86_64.rpm OpenEXR-debugsource-2.2.0-12.el8.i686.rpm OpenEXR-debugsource-2.2.0-12.el8.x86_64.rpm OpenEXR-libs-2.2.0-12.el8.i686.rpm OpenEXR-libs-2.2.0-12.el8.x86_64.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.i686.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.x86_64.rpm accountsservice-0.6.55-1.el8.x86_64.rpm accountsservice-debuginfo-0.6.55-1.el8.i686.rpm accountsservice-debuginfo-0.6.55-1.el8.x86_64.rpm accountsservice-debugsource-0.6.55-1.el8.i686.rpm accountsservice-debugsource-0.6.55-1.el8.x86_64.rpm accountsservice-libs-0.6.55-1.el8.i686.rpm accountsservice-libs-0.6.55-1.el8.x86_64.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.i686.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.x86_64.rpm atkmm-2.24.2-7.el8.i686.rpm atkmm-2.24.2-7.el8.x86_64.rpm atkmm-debuginfo-2.24.2-7.el8.i686.rpm atkmm-debuginfo-2.24.2-7.el8.x86_64.rpm atkmm-debugsource-2.24.2-7.el8.i686.rpm atkmm-debugsource-2.24.2-7.el8.x86_64.rpm cairomm-1.12.0-8.el8.i686.rpm cairomm-1.12.0-8.el8.x86_64.rpm cairomm-debuginfo-1.12.0-8.el8.i686.rpm cairomm-debuginfo-1.12.0-8.el8.x86_64.rpm cairomm-debugsource-1.12.0-8.el8.i686.rpm cairomm-debugsource-1.12.0-8.el8.x86_64.rpm chrome-gnome-shell-10.1-7.el8.x86_64.rpm dleyna-core-0.6.0-3.el8.i686.rpm dleyna-core-0.6.0-3.el8.x86_64.rpm dleyna-core-debuginfo-0.6.0-3.el8.i686.rpm dleyna-core-debuginfo-0.6.0-3.el8.x86_64.rpm dleyna-core-debugsource-0.6.0-3.el8.i686.rpm dleyna-core-debugsource-0.6.0-3.el8.x86_64.rpm dleyna-server-0.6.0-3.el8.x86_64.rpm dleyna-server-debuginfo-0.6.0-3.el8.x86_64.rpm dleyna-server-debugsource-0.6.0-3.el8.x86_64.rpm enchant2-2.2.3-3.el8.i686.rpm enchant2-2.2.3-3.el8.x86_64.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.i686.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.x86_64.rpm enchant2-debuginfo-2.2.3-3.el8.i686.rpm enchant2-debuginfo-2.2.3-3.el8.x86_64.rpm enchant2-debugsource-2.2.3-3.el8.i686.rpm enchant2-debugsource-2.2.3-3.el8.x86_64.rpm 
enchant2-voikko-debuginfo-2.2.3-3.el8.i686.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.x86_64.rpm gdm-3.28.3-39.el8.i686.rpm gdm-3.28.3-39.el8.x86_64.rpm gdm-debuginfo-3.28.3-39.el8.i686.rpm gdm-debuginfo-3.28.3-39.el8.x86_64.rpm gdm-debugsource-3.28.3-39.el8.i686.rpm gdm-debugsource-3.28.3-39.el8.x86_64.rpm geoclue2-2.5.5-2.el8.i686.rpm geoclue2-2.5.5-2.el8.x86_64.rpm geoclue2-debuginfo-2.5.5-2.el8.i686.rpm geoclue2-debuginfo-2.5.5-2.el8.x86_64.rpm geoclue2-debugsource-2.5.5-2.el8.i686.rpm geoclue2-debugsource-2.5.5-2.el8.x86_64.rpm geoclue2-demos-2.5.5-2.el8.x86_64.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.i686.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.x86_64.rpm geoclue2-libs-2.5.5-2.el8.i686.rpm geoclue2-libs-2.5.5-2.el8.x86_64.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.i686.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.x86_64.rpm geocode-glib-3.26.0-3.el8.i686.rpm geocode-glib-3.26.0-3.el8.x86_64.rpm geocode-glib-debuginfo-3.26.0-3.el8.i686.rpm geocode-glib-debuginfo-3.26.0-3.el8.x86_64.rpm geocode-glib-debugsource-3.26.0-3.el8.i686.rpm geocode-glib-debugsource-3.26.0-3.el8.x86_64.rpm geocode-glib-devel-3.26.0-3.el8.i686.rpm geocode-glib-devel-3.26.0-3.el8.x86_64.rpm gjs-1.56.2-5.el8.i686.rpm gjs-1.56.2-5.el8.x86_64.rpm gjs-debuginfo-1.56.2-5.el8.i686.rpm gjs-debuginfo-1.56.2-5.el8.x86_64.rpm gjs-debugsource-1.56.2-5.el8.i686.rpm gjs-debugsource-1.56.2-5.el8.x86_64.rpm gjs-tests-debuginfo-1.56.2-5.el8.i686.rpm gjs-tests-debuginfo-1.56.2-5.el8.x86_64.rpm glibmm24-2.56.0-2.el8.i686.rpm glibmm24-2.56.0-2.el8.x86_64.rpm glibmm24-debuginfo-2.56.0-2.el8.i686.rpm glibmm24-debuginfo-2.56.0-2.el8.x86_64.rpm glibmm24-debugsource-2.56.0-2.el8.i686.rpm glibmm24-debugsource-2.56.0-2.el8.x86_64.rpm gnome-boxes-3.36.5-8.el8.x86_64.rpm gnome-boxes-debuginfo-3.36.5-8.el8.x86_64.rpm gnome-boxes-debugsource-3.36.5-8.el8.x86_64.rpm gnome-control-center-3.28.2-27.el8.x86_64.rpm gnome-control-center-debuginfo-3.28.2-27.el8.x86_64.rpm gnome-control-center-debugsource-3.28.2-27.el8.x86_64.rpm 
gnome-online-accounts-3.28.2-2.el8.i686.rpm gnome-online-accounts-3.28.2-2.el8.x86_64.rpm gnome-online-accounts-debuginfo-3.28.2-2.el8.i686.rpm gnome-online-accounts-debuginfo-3.28.2-2.el8.x86_64.rpm gnome-online-accounts-debugsource-3.28.2-2.el8.i686.rpm gnome-online-accounts-debugsource-3.28.2-2.el8.x86_64.rpm gnome-online-accounts-devel-3.28.2-2.el8.i686.rpm gnome-online-accounts-devel-3.28.2-2.el8.x86_64.rpm gnome-photos-3.28.1-4.el8.x86_64.rpm gnome-photos-debuginfo-3.28.1-4.el8.x86_64.rpm gnome-photos-debugsource-3.28.1-4.el8.x86_64.rpm gnome-photos-tests-3.28.1-4.el8.x86_64.rpm gnome-settings-daemon-3.32.0-14.el8.x86_64.rpm gnome-settings-daemon-debuginfo-3.32.0-14.el8.x86_64.rpm gnome-settings-daemon-debugsource-3.32.0-14.el8.x86_64.rpm gnome-shell-3.32.2-30.el8.x86_64.rpm gnome-shell-debuginfo-3.32.2-30.el8.x86_64.rpm gnome-shell-debugsource-3.32.2-30.el8.x86_64.rpm gnome-software-3.36.1-5.el8.x86_64.rpm gnome-software-debuginfo-3.36.1-5.el8.x86_64.rpm gnome-software-debugsource-3.36.1-5.el8.x86_64.rpm gnome-terminal-3.28.3-3.el8.x86_64.rpm gnome-terminal-debuginfo-3.28.3-3.el8.x86_64.rpm gnome-terminal-debugsource-3.28.3-3.el8.x86_64.rpm gnome-terminal-nautilus-3.28.3-3.el8.x86_64.rpm gnome-terminal-nautilus-debuginfo-3.28.3-3.el8.x86_64.rpm gtk2-2.24.32-5.el8.i686.rpm gtk2-2.24.32-5.el8.x86_64.rpm gtk2-debuginfo-2.24.32-5.el8.i686.rpm gtk2-debuginfo-2.24.32-5.el8.x86_64.rpm gtk2-debugsource-2.24.32-5.el8.i686.rpm gtk2-debugsource-2.24.32-5.el8.x86_64.rpm gtk2-devel-2.24.32-5.el8.i686.rpm gtk2-devel-2.24.32-5.el8.x86_64.rpm gtk2-devel-debuginfo-2.24.32-5.el8.i686.rpm gtk2-devel-debuginfo-2.24.32-5.el8.x86_64.rpm gtk2-devel-docs-2.24.32-5.el8.x86_64.rpm gtk2-immodule-xim-2.24.32-5.el8.i686.rpm gtk2-immodule-xim-2.24.32-5.el8.x86_64.rpm gtk2-immodule-xim-debuginfo-2.24.32-5.el8.i686.rpm gtk2-immodule-xim-debuginfo-2.24.32-5.el8.x86_64.rpm gtk2-immodules-2.24.32-5.el8.i686.rpm gtk2-immodules-2.24.32-5.el8.x86_64.rpm 
gtk2-immodules-debuginfo-2.24.32-5.el8.i686.rpm gtk2-immodules-debuginfo-2.24.32-5.el8.x86_64.rpm gtkmm24-2.24.5-6.el8.i686.rpm gtkmm24-2.24.5-6.el8.x86_64.rpm gtkmm24-debuginfo-2.24.5-6.el8.i686.rpm gtkmm24-debuginfo-2.24.5-6.el8.x86_64.rpm gtkmm24-debugsource-2.24.5-6.el8.i686.rpm gtkmm24-debugsource-2.24.5-6.el8.x86_64.rpm gtkmm30-3.22.2-3.el8.i686.rpm gtkmm30-3.22.2-3.el8.x86_64.rpm gtkmm30-debuginfo-3.22.2-3.el8.i686.rpm gtkmm30-debuginfo-3.22.2-3.el8.x86_64.rpm gtkmm30-debugsource-3.22.2-3.el8.i686.rpm gtkmm30-debugsource-3.22.2-3.el8.x86_64.rpm gvfs-1.36.2-11.el8.x86_64.rpm gvfs-afc-1.36.2-11.el8.x86_64.rpm gvfs-afc-debuginfo-1.36.2-11.el8.i686.rpm gvfs-afc-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-afp-1.36.2-11.el8.x86_64.rpm gvfs-afp-debuginfo-1.36.2-11.el8.i686.rpm gvfs-afp-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-archive-1.36.2-11.el8.x86_64.rpm gvfs-archive-debuginfo-1.36.2-11.el8.i686.rpm gvfs-archive-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-client-1.36.2-11.el8.i686.rpm gvfs-client-1.36.2-11.el8.x86_64.rpm gvfs-client-debuginfo-1.36.2-11.el8.i686.rpm gvfs-client-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-debuginfo-1.36.2-11.el8.i686.rpm gvfs-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-debugsource-1.36.2-11.el8.i686.rpm gvfs-debugsource-1.36.2-11.el8.x86_64.rpm gvfs-devel-1.36.2-11.el8.i686.rpm gvfs-devel-1.36.2-11.el8.x86_64.rpm gvfs-fuse-1.36.2-11.el8.x86_64.rpm gvfs-fuse-debuginfo-1.36.2-11.el8.i686.rpm gvfs-fuse-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-goa-1.36.2-11.el8.x86_64.rpm gvfs-goa-debuginfo-1.36.2-11.el8.i686.rpm gvfs-goa-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-gphoto2-1.36.2-11.el8.x86_64.rpm gvfs-gphoto2-debuginfo-1.36.2-11.el8.i686.rpm gvfs-gphoto2-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-mtp-1.36.2-11.el8.x86_64.rpm gvfs-mtp-debuginfo-1.36.2-11.el8.i686.rpm gvfs-mtp-debuginfo-1.36.2-11.el8.x86_64.rpm gvfs-smb-1.36.2-11.el8.x86_64.rpm gvfs-smb-debuginfo-1.36.2-11.el8.i686.rpm gvfs-smb-debuginfo-1.36.2-11.el8.x86_64.rpm 
libdazzle-3.28.5-2.el8.i686.rpm libdazzle-3.28.5-2.el8.x86_64.rpm libdazzle-debuginfo-3.28.5-2.el8.i686.rpm libdazzle-debuginfo-3.28.5-2.el8.x86_64.rpm libdazzle-debugsource-3.28.5-2.el8.i686.rpm libdazzle-debugsource-3.28.5-2.el8.x86_64.rpm libepubgen-0.1.0-3.el8.i686.rpm libepubgen-0.1.0-3.el8.x86_64.rpm libepubgen-debuginfo-0.1.0-3.el8.i686.rpm libepubgen-debuginfo-0.1.0-3.el8.x86_64.rpm libepubgen-debugsource-0.1.0-3.el8.i686.rpm libepubgen-debugsource-0.1.0-3.el8.x86_64.rpm libsigc++20-2.10.0-6.el8.i686.rpm libsigc++20-2.10.0-6.el8.x86_64.rpm libsigc++20-debuginfo-2.10.0-6.el8.i686.rpm libsigc++20-debuginfo-2.10.0-6.el8.x86_64.rpm libsigc++20-debugsource-2.10.0-6.el8.i686.rpm libsigc++20-debugsource-2.10.0-6.el8.x86_64.rpm libvisual-0.4.0-25.el8.i686.rpm libvisual-0.4.0-25.el8.x86_64.rpm libvisual-debuginfo-0.4.0-25.el8.i686.rpm libvisual-debuginfo-0.4.0-25.el8.x86_64.rpm libvisual-debugsource-0.4.0-25.el8.i686.rpm libvisual-debugsource-0.4.0-25.el8.x86_64.rpm mutter-3.32.2-57.el8.i686.rpm mutter-3.32.2-57.el8.x86_64.rpm mutter-debuginfo-3.32.2-57.el8.i686.rpm mutter-debuginfo-3.32.2-57.el8.x86_64.rpm mutter-debugsource-3.32.2-57.el8.i686.rpm mutter-debugsource-3.32.2-57.el8.x86_64.rpm mutter-tests-debuginfo-3.32.2-57.el8.i686.rpm mutter-tests-debuginfo-3.32.2-57.el8.x86_64.rpm nautilus-3.28.1-15.el8.x86_64.rpm nautilus-debuginfo-3.28.1-15.el8.i686.rpm nautilus-debuginfo-3.28.1-15.el8.x86_64.rpm nautilus-debugsource-3.28.1-15.el8.i686.rpm nautilus-debugsource-3.28.1-15.el8.x86_64.rpm nautilus-extensions-3.28.1-15.el8.i686.rpm nautilus-extensions-3.28.1-15.el8.x86_64.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.i686.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.x86_64.rpm pangomm-2.40.1-6.el8.i686.rpm pangomm-2.40.1-6.el8.x86_64.rpm pangomm-debuginfo-2.40.1-6.el8.i686.rpm pangomm-debuginfo-2.40.1-6.el8.x86_64.rpm pangomm-debugsource-2.40.1-6.el8.i686.rpm pangomm-debugsource-2.40.1-6.el8.x86_64.rpm soundtouch-2.0.0-3.el8.i686.rpm 
soundtouch-2.0.0-3.el8.x86_64.rpm soundtouch-debuginfo-2.0.0-3.el8.i686.rpm soundtouch-debuginfo-2.0.0-3.el8.x86_64.rpm soundtouch-debugsource-2.0.0-3.el8.i686.rpm soundtouch-debugsource-2.0.0-3.el8.x86_64.rpm webkit2gtk3-2.30.4-1.el8.i686.rpm webkit2gtk3-2.30.4-1.el8.x86_64.rpm webkit2gtk3-debuginfo-2.30.4-1.el8.i686.rpm webkit2gtk3-debuginfo-2.30.4-1.el8.x86_64.rpm webkit2gtk3-debugsource-2.30.4-1.el8.i686.rpm webkit2gtk3-debugsource-2.30.4-1.el8.x86_64.rpm webkit2gtk3-devel-2.30.4-1.el8.i686.rpm webkit2gtk3-devel-2.30.4-1.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.30.4-1.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.30.4-1.el8.x86_64.rpm webkit2gtk3-jsc-2.30.4-1.el8.i686.rpm webkit2gtk3-jsc-2.30.4-1.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.30.4-1.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.30.4-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.30.4-1.el8.i686.rpm webkit2gtk3-jsc-devel-2.30.4-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.i686.rpm webkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.x86_64.rpm woff2-1.0.2-5.el8.i686.rpm woff2-1.0.2-5.el8.x86_64.rpm woff2-debuginfo-1.0.2-5.el8.i686.rpm woff2-debuginfo-1.0.2-5.el8.x86_64.rpm woff2-debugsource-1.0.2-5.el8.i686.rpm woff2-debugsource-1.0.2-5.el8.x86_64.rpm

Red Hat Enterprise Linux BaseOS (v. 8):

Source: gamin-0.1.10-32.el8.src.rpm glib2-2.56.4-9.el8.src.rpm

aarch64: gamin-0.1.10-32.el8.aarch64.rpm gamin-debuginfo-0.1.10-32.el8.aarch64.rpm gamin-debugsource-0.1.10-32.el8.aarch64.rpm glib2-2.56.4-9.el8.aarch64.rpm glib2-debuginfo-2.56.4-9.el8.aarch64.rpm glib2-debugsource-2.56.4-9.el8.aarch64.rpm glib2-devel-2.56.4-9.el8.aarch64.rpm glib2-devel-debuginfo-2.56.4-9.el8.aarch64.rpm glib2-fam-2.56.4-9.el8.aarch64.rpm glib2-fam-debuginfo-2.56.4-9.el8.aarch64.rpm glib2-tests-2.56.4-9.el8.aarch64.rpm glib2-tests-debuginfo-2.56.4-9.el8.aarch64.rpm

ppc64le: gamin-0.1.10-32.el8.ppc64le.rpm gamin-debuginfo-0.1.10-32.el8.ppc64le.rpm gamin-debugsource-0.1.10-32.el8.ppc64le.rpm glib2-2.56.4-9.el8.ppc64le.rpm glib2-debuginfo-2.56.4-9.el8.ppc64le.rpm glib2-debugsource-2.56.4-9.el8.ppc64le.rpm glib2-devel-2.56.4-9.el8.ppc64le.rpm glib2-devel-debuginfo-2.56.4-9.el8.ppc64le.rpm glib2-fam-2.56.4-9.el8.ppc64le.rpm glib2-fam-debuginfo-2.56.4-9.el8.ppc64le.rpm glib2-tests-2.56.4-9.el8.ppc64le.rpm glib2-tests-debuginfo-2.56.4-9.el8.ppc64le.rpm

s390x: gamin-0.1.10-32.el8.s390x.rpm gamin-debuginfo-0.1.10-32.el8.s390x.rpm gamin-debugsource-0.1.10-32.el8.s390x.rpm glib2-2.56.4-9.el8.s390x.rpm glib2-debuginfo-2.56.4-9.el8.s390x.rpm glib2-debugsource-2.56.4-9.el8.s390x.rpm glib2-devel-2.56.4-9.el8.s390x.rpm glib2-devel-debuginfo-2.56.4-9.el8.s390x.rpm glib2-fam-2.56.4-9.el8.s390x.rpm glib2-fam-debuginfo-2.56.4-9.el8.s390x.rpm glib2-tests-2.56.4-9.el8.s390x.rpm glib2-tests-debuginfo-2.56.4-9.el8.s390x.rpm

x86_64: gamin-0.1.10-32.el8.i686.rpm gamin-0.1.10-32.el8.x86_64.rpm gamin-debuginfo-0.1.10-32.el8.i686.rpm gamin-debuginfo-0.1.10-32.el8.x86_64.rpm gamin-debugsource-0.1.10-32.el8.i686.rpm gamin-debugsource-0.1.10-32.el8.x86_64.rpm glib2-2.56.4-9.el8.i686.rpm glib2-2.56.4-9.el8.x86_64.rpm glib2-debuginfo-2.56.4-9.el8.i686.rpm glib2-debuginfo-2.56.4-9.el8.x86_64.rpm glib2-debugsource-2.56.4-9.el8.i686.rpm glib2-debugsource-2.56.4-9.el8.x86_64.rpm glib2-devel-2.56.4-9.el8.i686.rpm glib2-devel-2.56.4-9.el8.x86_64.rpm glib2-devel-debuginfo-2.56.4-9.el8.i686.rpm glib2-devel-debuginfo-2.56.4-9.el8.x86_64.rpm glib2-fam-2.56.4-9.el8.x86_64.rpm glib2-fam-debuginfo-2.56.4-9.el8.i686.rpm glib2-fam-debuginfo-2.56.4-9.el8.x86_64.rpm glib2-tests-2.56.4-9.el8.x86_64.rpm glib2-tests-debuginfo-2.56.4-9.el8.i686.rpm glib2-tests-debuginfo-2.56.4-9.el8.x86_64.rpm
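The fixed builds listed above are picked up through the normal package-update workflow. As a minimal sketch (the package name and target build are taken from the glib2 entries in this advisory; repository configuration is assumed to already point at BaseOS):

```shell
# Show the currently installed glib2 build.
rpm -q glib2

# Update glib2 to the fixed build from the configured repositories.
dnf update glib2

# Confirm the installed build now matches the advisory (2.56.4-9.el8).
rpm -q glib2
```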

Red Hat CodeReady Linux Builder (v. 8):

Source: gtk-doc-1.28-3.el8.src.rpm libdazzle-3.28.5-2.el8.src.rpm libepubgen-0.1.0-3.el8.src.rpm libsass-3.4.5-6.el8.src.rpm vala-0.40.19-2.el8.src.rpm

aarch64: OpenEXR-debuginfo-2.2.0-12.el8.aarch64.rpm OpenEXR-debugsource-2.2.0-12.el8.aarch64.rpm OpenEXR-devel-2.2.0-12.el8.aarch64.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.aarch64.rpm accountsservice-debuginfo-0.6.55-1.el8.aarch64.rpm accountsservice-debugsource-0.6.55-1.el8.aarch64.rpm accountsservice-devel-0.6.55-1.el8.aarch64.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.aarch64.rpm atkmm-debuginfo-2.24.2-7.el8.aarch64.rpm atkmm-debugsource-2.24.2-7.el8.aarch64.rpm atkmm-devel-2.24.2-7.el8.aarch64.rpm cairomm-debuginfo-1.12.0-8.el8.aarch64.rpm cairomm-debugsource-1.12.0-8.el8.aarch64.rpm cairomm-devel-1.12.0-8.el8.aarch64.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.aarch64.rpm enchant2-debuginfo-2.2.3-3.el8.aarch64.rpm enchant2-debugsource-2.2.3-3.el8.aarch64.rpm enchant2-devel-2.2.3-3.el8.aarch64.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.aarch64.rpm gamin-debuginfo-0.1.10-32.el8.aarch64.rpm gamin-debugsource-0.1.10-32.el8.aarch64.rpm gamin-devel-0.1.10-32.el8.aarch64.rpm geoclue2-debuginfo-2.5.5-2.el8.aarch64.rpm geoclue2-debugsource-2.5.5-2.el8.aarch64.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.aarch64.rpm geoclue2-devel-2.5.5-2.el8.aarch64.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.aarch64.rpm gjs-debuginfo-1.56.2-5.el8.aarch64.rpm gjs-debugsource-1.56.2-5.el8.aarch64.rpm gjs-devel-1.56.2-5.el8.aarch64.rpm gjs-tests-debuginfo-1.56.2-5.el8.aarch64.rpm glib2-debuginfo-2.56.4-9.el8.aarch64.rpm glib2-debugsource-2.56.4-9.el8.aarch64.rpm glib2-devel-debuginfo-2.56.4-9.el8.aarch64.rpm glib2-fam-debuginfo-2.56.4-9.el8.aarch64.rpm glib2-static-2.56.4-9.el8.aarch64.rpm glib2-tests-debuginfo-2.56.4-9.el8.aarch64.rpm glibmm24-debuginfo-2.56.0-2.el8.aarch64.rpm glibmm24-debugsource-2.56.0-2.el8.aarch64.rpm glibmm24-devel-2.56.0-2.el8.aarch64.rpm gtk-doc-1.28-3.el8.aarch64.rpm gtkmm24-debuginfo-2.24.5-6.el8.aarch64.rpm gtkmm24-debugsource-2.24.5-6.el8.aarch64.rpm gtkmm24-devel-2.24.5-6.el8.aarch64.rpm gtkmm30-debuginfo-3.22.2-3.el8.aarch64.rpm 
gtkmm30-debugsource-3.22.2-3.el8.aarch64.rpm gtkmm30-devel-3.22.2-3.el8.aarch64.rpm libdazzle-3.28.5-2.el8.aarch64.rpm libdazzle-debuginfo-3.28.5-2.el8.aarch64.rpm libdazzle-debugsource-3.28.5-2.el8.aarch64.rpm libdazzle-devel-3.28.5-2.el8.aarch64.rpm libepubgen-0.1.0-3.el8.aarch64.rpm libepubgen-debuginfo-0.1.0-3.el8.aarch64.rpm libepubgen-debugsource-0.1.0-3.el8.aarch64.rpm libepubgen-devel-0.1.0-3.el8.aarch64.rpm libsass-3.4.5-6.el8.aarch64.rpm libsass-debuginfo-3.4.5-6.el8.aarch64.rpm libsass-debugsource-3.4.5-6.el8.aarch64.rpm libsass-devel-3.4.5-6.el8.aarch64.rpm libsigc++20-debuginfo-2.10.0-6.el8.aarch64.rpm libsigc++20-debugsource-2.10.0-6.el8.aarch64.rpm libsigc++20-devel-2.10.0-6.el8.aarch64.rpm libvisual-debuginfo-0.4.0-25.el8.aarch64.rpm libvisual-debugsource-0.4.0-25.el8.aarch64.rpm libvisual-devel-0.4.0-25.el8.aarch64.rpm mutter-debuginfo-3.32.2-57.el8.aarch64.rpm mutter-debugsource-3.32.2-57.el8.aarch64.rpm mutter-devel-3.32.2-57.el8.aarch64.rpm mutter-tests-debuginfo-3.32.2-57.el8.aarch64.rpm nautilus-debuginfo-3.28.1-15.el8.aarch64.rpm nautilus-debugsource-3.28.1-15.el8.aarch64.rpm nautilus-devel-3.28.1-15.el8.aarch64.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.aarch64.rpm pangomm-debuginfo-2.40.1-6.el8.aarch64.rpm pangomm-debugsource-2.40.1-6.el8.aarch64.rpm pangomm-devel-2.40.1-6.el8.aarch64.rpm soundtouch-debuginfo-2.0.0-3.el8.aarch64.rpm soundtouch-debugsource-2.0.0-3.el8.aarch64.rpm soundtouch-devel-2.0.0-3.el8.aarch64.rpm vala-0.40.19-2.el8.aarch64.rpm vala-debuginfo-0.40.19-2.el8.aarch64.rpm vala-debugsource-0.40.19-2.el8.aarch64.rpm vala-devel-0.40.19-2.el8.aarch64.rpm valadoc-debuginfo-0.40.19-2.el8.aarch64.rpm woff2-debuginfo-1.0.2-5.el8.aarch64.rpm woff2-debugsource-1.0.2-5.el8.aarch64.rpm woff2-devel-1.0.2-5.el8.aarch64.rpm

noarch: atkmm-doc-2.24.2-7.el8.noarch.rpm cairomm-doc-1.12.0-8.el8.noarch.rpm glib2-doc-2.56.4-9.el8.noarch.rpm glibmm24-doc-2.56.0-2.el8.noarch.rpm gtkmm24-docs-2.24.5-6.el8.noarch.rpm gtkmm30-doc-3.22.2-3.el8.noarch.rpm libsigc++20-doc-2.10.0-6.el8.noarch.rpm pangomm-doc-2.40.1-6.el8.noarch.rpm

ppc64le: OpenEXR-debuginfo-2.2.0-12.el8.ppc64le.rpm OpenEXR-debugsource-2.2.0-12.el8.ppc64le.rpm OpenEXR-devel-2.2.0-12.el8.ppc64le.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.ppc64le.rpm accountsservice-debuginfo-0.6.55-1.el8.ppc64le.rpm accountsservice-debugsource-0.6.55-1.el8.ppc64le.rpm accountsservice-devel-0.6.55-1.el8.ppc64le.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.ppc64le.rpm atkmm-debuginfo-2.24.2-7.el8.ppc64le.rpm atkmm-debugsource-2.24.2-7.el8.ppc64le.rpm atkmm-devel-2.24.2-7.el8.ppc64le.rpm cairomm-debuginfo-1.12.0-8.el8.ppc64le.rpm cairomm-debugsource-1.12.0-8.el8.ppc64le.rpm cairomm-devel-1.12.0-8.el8.ppc64le.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.ppc64le.rpm enchant2-debuginfo-2.2.3-3.el8.ppc64le.rpm enchant2-debugsource-2.2.3-3.el8.ppc64le.rpm enchant2-devel-2.2.3-3.el8.ppc64le.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.ppc64le.rpm gamin-debuginfo-0.1.10-32.el8.ppc64le.rpm gamin-debugsource-0.1.10-32.el8.ppc64le.rpm gamin-devel-0.1.10-32.el8.ppc64le.rpm geoclue2-debuginfo-2.5.5-2.el8.ppc64le.rpm geoclue2-debugsource-2.5.5-2.el8.ppc64le.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.ppc64le.rpm geoclue2-devel-2.5.5-2.el8.ppc64le.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.ppc64le.rpm gjs-debuginfo-1.56.2-5.el8.ppc64le.rpm gjs-debugsource-1.56.2-5.el8.ppc64le.rpm gjs-devel-1.56.2-5.el8.ppc64le.rpm gjs-tests-debuginfo-1.56.2-5.el8.ppc64le.rpm glib2-debuginfo-2.56.4-9.el8.ppc64le.rpm glib2-debugsource-2.56.4-9.el8.ppc64le.rpm glib2-devel-debuginfo-2.56.4-9.el8.ppc64le.rpm glib2-fam-debuginfo-2.56.4-9.el8.ppc64le.rpm glib2-static-2.56.4-9.el8.ppc64le.rpm glib2-tests-debuginfo-2.56.4-9.el8.ppc64le.rpm glibmm24-debuginfo-2.56.0-2.el8.ppc64le.rpm glibmm24-debugsource-2.56.0-2.el8.ppc64le.rpm glibmm24-devel-2.56.0-2.el8.ppc64le.rpm gtk-doc-1.28-3.el8.ppc64le.rpm gtkmm24-debuginfo-2.24.5-6.el8.ppc64le.rpm gtkmm24-debugsource-2.24.5-6.el8.ppc64le.rpm gtkmm24-devel-2.24.5-6.el8.ppc64le.rpm gtkmm30-debuginfo-3.22.2-3.el8.ppc64le.rpm 
gtkmm30-debugsource-3.22.2-3.el8.ppc64le.rpm gtkmm30-devel-3.22.2-3.el8.ppc64le.rpm libdazzle-debuginfo-3.28.5-2.el8.ppc64le.rpm libdazzle-debugsource-3.28.5-2.el8.ppc64le.rpm libdazzle-devel-3.28.5-2.el8.ppc64le.rpm libepubgen-debuginfo-0.1.0-3.el8.ppc64le.rpm libepubgen-debugsource-0.1.0-3.el8.ppc64le.rpm libepubgen-devel-0.1.0-3.el8.ppc64le.rpm libsass-3.4.5-6.el8.ppc64le.rpm libsass-debuginfo-3.4.5-6.el8.ppc64le.rpm libsass-debugsource-3.4.5-6.el8.ppc64le.rpm libsass-devel-3.4.5-6.el8.ppc64le.rpm libsigc++20-debuginfo-2.10.0-6.el8.ppc64le.rpm libsigc++20-debugsource-2.10.0-6.el8.ppc64le.rpm libsigc++20-devel-2.10.0-6.el8.ppc64le.rpm libvisual-debuginfo-0.4.0-25.el8.ppc64le.rpm libvisual-debugsource-0.4.0-25.el8.ppc64le.rpm libvisual-devel-0.4.0-25.el8.ppc64le.rpm mutter-debuginfo-3.32.2-57.el8.ppc64le.rpm mutter-debugsource-3.32.2-57.el8.ppc64le.rpm mutter-devel-3.32.2-57.el8.ppc64le.rpm mutter-tests-debuginfo-3.32.2-57.el8.ppc64le.rpm nautilus-debuginfo-3.28.1-15.el8.ppc64le.rpm nautilus-debugsource-3.28.1-15.el8.ppc64le.rpm nautilus-devel-3.28.1-15.el8.ppc64le.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.ppc64le.rpm pangomm-debuginfo-2.40.1-6.el8.ppc64le.rpm pangomm-debugsource-2.40.1-6.el8.ppc64le.rpm pangomm-devel-2.40.1-6.el8.ppc64le.rpm soundtouch-debuginfo-2.0.0-3.el8.ppc64le.rpm soundtouch-debugsource-2.0.0-3.el8.ppc64le.rpm soundtouch-devel-2.0.0-3.el8.ppc64le.rpm vala-0.40.19-2.el8.ppc64le.rpm vala-debuginfo-0.40.19-2.el8.ppc64le.rpm vala-debugsource-0.40.19-2.el8.ppc64le.rpm vala-devel-0.40.19-2.el8.ppc64le.rpm valadoc-debuginfo-0.40.19-2.el8.ppc64le.rpm woff2-debuginfo-1.0.2-5.el8.ppc64le.rpm woff2-debugsource-1.0.2-5.el8.ppc64le.rpm woff2-devel-1.0.2-5.el8.ppc64le.rpm

s390x: OpenEXR-debuginfo-2.2.0-12.el8.s390x.rpm OpenEXR-debugsource-2.2.0-12.el8.s390x.rpm OpenEXR-devel-2.2.0-12.el8.s390x.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.s390x.rpm accountsservice-debuginfo-0.6.55-1.el8.s390x.rpm accountsservice-debugsource-0.6.55-1.el8.s390x.rpm accountsservice-devel-0.6.55-1.el8.s390x.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.s390x.rpm atkmm-debuginfo-2.24.2-7.el8.s390x.rpm atkmm-debugsource-2.24.2-7.el8.s390x.rpm atkmm-devel-2.24.2-7.el8.s390x.rpm cairomm-debuginfo-1.12.0-8.el8.s390x.rpm cairomm-debugsource-1.12.0-8.el8.s390x.rpm cairomm-devel-1.12.0-8.el8.s390x.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.s390x.rpm enchant2-debuginfo-2.2.3-3.el8.s390x.rpm enchant2-debugsource-2.2.3-3.el8.s390x.rpm enchant2-devel-2.2.3-3.el8.s390x.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.s390x.rpm gamin-debuginfo-0.1.10-32.el8.s390x.rpm gamin-debugsource-0.1.10-32.el8.s390x.rpm gamin-devel-0.1.10-32.el8.s390x.rpm geoclue2-debuginfo-2.5.5-2.el8.s390x.rpm geoclue2-debugsource-2.5.5-2.el8.s390x.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.s390x.rpm geoclue2-devel-2.5.5-2.el8.s390x.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.s390x.rpm gjs-debuginfo-1.56.2-5.el8.s390x.rpm gjs-debugsource-1.56.2-5.el8.s390x.rpm gjs-devel-1.56.2-5.el8.s390x.rpm gjs-tests-debuginfo-1.56.2-5.el8.s390x.rpm glib2-debuginfo-2.56.4-9.el8.s390x.rpm glib2-debugsource-2.56.4-9.el8.s390x.rpm glib2-devel-debuginfo-2.56.4-9.el8.s390x.rpm glib2-fam-debuginfo-2.56.4-9.el8.s390x.rpm glib2-static-2.56.4-9.el8.s390x.rpm glib2-tests-debuginfo-2.56.4-9.el8.s390x.rpm glibmm24-debuginfo-2.56.0-2.el8.s390x.rpm glibmm24-debugsource-2.56.0-2.el8.s390x.rpm glibmm24-devel-2.56.0-2.el8.s390x.rpm gtk-doc-1.28-3.el8.s390x.rpm gtkmm24-debuginfo-2.24.5-6.el8.s390x.rpm gtkmm24-debugsource-2.24.5-6.el8.s390x.rpm gtkmm24-devel-2.24.5-6.el8.s390x.rpm gtkmm30-debuginfo-3.22.2-3.el8.s390x.rpm gtkmm30-debugsource-3.22.2-3.el8.s390x.rpm gtkmm30-devel-3.22.2-3.el8.s390x.rpm libdazzle-3.28.5-2.el8.s390x.rpm 
libdazzle-debuginfo-3.28.5-2.el8.s390x.rpm libdazzle-debugsource-3.28.5-2.el8.s390x.rpm libdazzle-devel-3.28.5-2.el8.s390x.rpm libepubgen-0.1.0-3.el8.s390x.rpm libepubgen-debuginfo-0.1.0-3.el8.s390x.rpm libepubgen-debugsource-0.1.0-3.el8.s390x.rpm libepubgen-devel-0.1.0-3.el8.s390x.rpm libsass-3.4.5-6.el8.s390x.rpm libsass-debuginfo-3.4.5-6.el8.s390x.rpm libsass-debugsource-3.4.5-6.el8.s390x.rpm libsass-devel-3.4.5-6.el8.s390x.rpm libsigc++20-debuginfo-2.10.0-6.el8.s390x.rpm libsigc++20-debugsource-2.10.0-6.el8.s390x.rpm libsigc++20-devel-2.10.0-6.el8.s390x.rpm libvisual-debuginfo-0.4.0-25.el8.s390x.rpm libvisual-debugsource-0.4.0-25.el8.s390x.rpm libvisual-devel-0.4.0-25.el8.s390x.rpm mutter-debuginfo-3.32.2-57.el8.s390x.rpm mutter-debugsource-3.32.2-57.el8.s390x.rpm mutter-devel-3.32.2-57.el8.s390x.rpm mutter-tests-debuginfo-3.32.2-57.el8.s390x.rpm nautilus-debuginfo-3.28.1-15.el8.s390x.rpm nautilus-debugsource-3.28.1-15.el8.s390x.rpm nautilus-devel-3.28.1-15.el8.s390x.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.s390x.rpm pangomm-debuginfo-2.40.1-6.el8.s390x.rpm pangomm-debugsource-2.40.1-6.el8.s390x.rpm pangomm-devel-2.40.1-6.el8.s390x.rpm soundtouch-debuginfo-2.0.0-3.el8.s390x.rpm soundtouch-debugsource-2.0.0-3.el8.s390x.rpm soundtouch-devel-2.0.0-3.el8.s390x.rpm vala-0.40.19-2.el8.s390x.rpm vala-debuginfo-0.40.19-2.el8.s390x.rpm vala-debugsource-0.40.19-2.el8.s390x.rpm vala-devel-0.40.19-2.el8.s390x.rpm valadoc-debuginfo-0.40.19-2.el8.s390x.rpm woff2-debuginfo-1.0.2-5.el8.s390x.rpm woff2-debugsource-1.0.2-5.el8.s390x.rpm woff2-devel-1.0.2-5.el8.s390x.rpm

x86_64: OpenEXR-debuginfo-2.2.0-12.el8.i686.rpm OpenEXR-debuginfo-2.2.0-12.el8.x86_64.rpm OpenEXR-debugsource-2.2.0-12.el8.i686.rpm OpenEXR-debugsource-2.2.0-12.el8.x86_64.rpm OpenEXR-devel-2.2.0-12.el8.i686.rpm OpenEXR-devel-2.2.0-12.el8.x86_64.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.i686.rpm OpenEXR-libs-debuginfo-2.2.0-12.el8.x86_64.rpm accountsservice-debuginfo-0.6.55-1.el8.i686.rpm accountsservice-debuginfo-0.6.55-1.el8.x86_64.rpm accountsservice-debugsource-0.6.55-1.el8.i686.rpm accountsservice-debugsource-0.6.55-1.el8.x86_64.rpm accountsservice-devel-0.6.55-1.el8.i686.rpm accountsservice-devel-0.6.55-1.el8.x86_64.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.i686.rpm accountsservice-libs-debuginfo-0.6.55-1.el8.x86_64.rpm atkmm-debuginfo-2.24.2-7.el8.i686.rpm atkmm-debuginfo-2.24.2-7.el8.x86_64.rpm atkmm-debugsource-2.24.2-7.el8.i686.rpm atkmm-debugsource-2.24.2-7.el8.x86_64.rpm atkmm-devel-2.24.2-7.el8.i686.rpm atkmm-devel-2.24.2-7.el8.x86_64.rpm cairomm-debuginfo-1.12.0-8.el8.i686.rpm cairomm-debuginfo-1.12.0-8.el8.x86_64.rpm cairomm-debugsource-1.12.0-8.el8.i686.rpm cairomm-debugsource-1.12.0-8.el8.x86_64.rpm cairomm-devel-1.12.0-8.el8.i686.rpm cairomm-devel-1.12.0-8.el8.x86_64.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.i686.rpm enchant2-aspell-debuginfo-2.2.3-3.el8.x86_64.rpm enchant2-debuginfo-2.2.3-3.el8.i686.rpm enchant2-debuginfo-2.2.3-3.el8.x86_64.rpm enchant2-debugsource-2.2.3-3.el8.i686.rpm enchant2-debugsource-2.2.3-3.el8.x86_64.rpm enchant2-devel-2.2.3-3.el8.i686.rpm enchant2-devel-2.2.3-3.el8.x86_64.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.i686.rpm enchant2-voikko-debuginfo-2.2.3-3.el8.x86_64.rpm gamin-debuginfo-0.1.10-32.el8.i686.rpm gamin-debuginfo-0.1.10-32.el8.x86_64.rpm gamin-debugsource-0.1.10-32.el8.i686.rpm gamin-debugsource-0.1.10-32.el8.x86_64.rpm gamin-devel-0.1.10-32.el8.i686.rpm gamin-devel-0.1.10-32.el8.x86_64.rpm geoclue2-debuginfo-2.5.5-2.el8.i686.rpm geoclue2-debuginfo-2.5.5-2.el8.x86_64.rpm 
geoclue2-debugsource-2.5.5-2.el8.i686.rpm geoclue2-debugsource-2.5.5-2.el8.x86_64.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.i686.rpm geoclue2-demos-debuginfo-2.5.5-2.el8.x86_64.rpm geoclue2-devel-2.5.5-2.el8.i686.rpm geoclue2-devel-2.5.5-2.el8.x86_64.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.i686.rpm geoclue2-libs-debuginfo-2.5.5-2.el8.x86_64.rpm gjs-debuginfo-1.56.2-5.el8.i686.rpm gjs-debuginfo-1.56.2-5.el8.x86_64.rpm gjs-debugsource-1.56.2-5.el8.i686.rpm gjs-debugsource-1.56.2-5.el8.x86_64.rpm gjs-devel-1.56.2-5.el8.i686.rpm gjs-devel-1.56.2-5.el8.x86_64.rpm gjs-tests-debuginfo-1.56.2-5.el8.i686.rpm gjs-tests-debuginfo-1.56.2-5.el8.x86_64.rpm glib2-debuginfo-2.56.4-9.el8.i686.rpm glib2-debuginfo-2.56.4-9.el8.x86_64.rpm glib2-debugsource-2.56.4-9.el8.i686.rpm glib2-debugsource-2.56.4-9.el8.x86_64.rpm glib2-devel-debuginfo-2.56.4-9.el8.i686.rpm glib2-devel-debuginfo-2.56.4-9.el8.x86_64.rpm glib2-fam-debuginfo-2.56.4-9.el8.i686.rpm glib2-fam-debuginfo-2.56.4-9.el8.x86_64.rpm glib2-static-2.56.4-9.el8.i686.rpm glib2-static-2.56.4-9.el8.x86_64.rpm glib2-tests-debuginfo-2.56.4-9.el8.i686.rpm glib2-tests-debuginfo-2.56.4-9.el8.x86_64.rpm glibmm24-debuginfo-2.56.0-2.el8.i686.rpm glibmm24-debuginfo-2.56.0-2.el8.x86_64.rpm glibmm24-debugsource-2.56.0-2.el8.i686.rpm glibmm24-debugsource-2.56.0-2.el8.x86_64.rpm glibmm24-devel-2.56.0-2.el8.i686.rpm glibmm24-devel-2.56.0-2.el8.x86_64.rpm gtk-doc-1.28-3.el8.x86_64.rpm gtkmm24-debuginfo-2.24.5-6.el8.i686.rpm gtkmm24-debuginfo-2.24.5-6.el8.x86_64.rpm gtkmm24-debugsource-2.24.5-6.el8.i686.rpm gtkmm24-debugsource-2.24.5-6.el8.x86_64.rpm gtkmm24-devel-2.24.5-6.el8.i686.rpm gtkmm24-devel-2.24.5-6.el8.x86_64.rpm gtkmm30-debuginfo-3.22.2-3.el8.i686.rpm gtkmm30-debuginfo-3.22.2-3.el8.x86_64.rpm gtkmm30-debugsource-3.22.2-3.el8.i686.rpm gtkmm30-debugsource-3.22.2-3.el8.x86_64.rpm gtkmm30-devel-3.22.2-3.el8.i686.rpm gtkmm30-devel-3.22.2-3.el8.x86_64.rpm gvfs-1.36.2-11.el8.i686.rpm gvfs-afc-debuginfo-1.36.2-11.el8.i686.rpm 
gvfs-afp-debuginfo-1.36.2-11.el8.i686.rpm gvfs-archive-debuginfo-1.36.2-11.el8.i686.rpm gvfs-client-debuginfo-1.36.2-11.el8.i686.rpm gvfs-debuginfo-1.36.2-11.el8.i686.rpm gvfs-debugsource-1.36.2-11.el8.i686.rpm gvfs-fuse-debuginfo-1.36.2-11.el8.i686.rpm gvfs-goa-debuginfo-1.36.2-11.el8.i686.rpm gvfs-gphoto2-debuginfo-1.36.2-11.el8.i686.rpm gvfs-mtp-debuginfo-1.36.2-11.el8.i686.rpm gvfs-smb-debuginfo-1.36.2-11.el8.i686.rpm libdazzle-debuginfo-3.28.5-2.el8.i686.rpm libdazzle-debuginfo-3.28.5-2.el8.x86_64.rpm libdazzle-debugsource-3.28.5-2.el8.i686.rpm libdazzle-debugsource-3.28.5-2.el8.x86_64.rpm libdazzle-devel-3.28.5-2.el8.i686.rpm libdazzle-devel-3.28.5-2.el8.x86_64.rpm libepubgen-debuginfo-0.1.0-3.el8.i686.rpm libepubgen-debuginfo-0.1.0-3.el8.x86_64.rpm libepubgen-debugsource-0.1.0-3.el8.i686.rpm libepubgen-debugsource-0.1.0-3.el8.x86_64.rpm libepubgen-devel-0.1.0-3.el8.i686.rpm libepubgen-devel-0.1.0-3.el8.x86_64.rpm libsass-3.4.5-6.el8.i686.rpm libsass-3.4.5-6.el8.x86_64.rpm libsass-debuginfo-3.4.5-6.el8.i686.rpm libsass-debuginfo-3.4.5-6.el8.x86_64.rpm libsass-debugsource-3.4.5-6.el8.i686.rpm libsass-debugsource-3.4.5-6.el8.x86_64.rpm libsass-devel-3.4.5-6.el8.i686.rpm libsass-devel-3.4.5-6.el8.x86_64.rpm libsigc++20-debuginfo-2.10.0-6.el8.i686.rpm libsigc++20-debuginfo-2.10.0-6.el8.x86_64.rpm libsigc++20-debugsource-2.10.0-6.el8.i686.rpm libsigc++20-debugsource-2.10.0-6.el8.x86_64.rpm libsigc++20-devel-2.10.0-6.el8.i686.rpm libsigc++20-devel-2.10.0-6.el8.x86_64.rpm libvisual-debuginfo-0.4.0-25.el8.i686.rpm libvisual-debuginfo-0.4.0-25.el8.x86_64.rpm libvisual-debugsource-0.4.0-25.el8.i686.rpm libvisual-debugsource-0.4.0-25.el8.x86_64.rpm libvisual-devel-0.4.0-25.el8.i686.rpm libvisual-devel-0.4.0-25.el8.x86_64.rpm mutter-debuginfo-3.32.2-57.el8.i686.rpm mutter-debuginfo-3.32.2-57.el8.x86_64.rpm mutter-debugsource-3.32.2-57.el8.i686.rpm mutter-debugsource-3.32.2-57.el8.x86_64.rpm mutter-devel-3.32.2-57.el8.i686.rpm mutter-devel-3.32.2-57.el8.x86_64.rpm 
mutter-tests-debuginfo-3.32.2-57.el8.i686.rpm mutter-tests-debuginfo-3.32.2-57.el8.x86_64.rpm nautilus-3.28.1-15.el8.i686.rpm nautilus-debuginfo-3.28.1-15.el8.i686.rpm nautilus-debuginfo-3.28.1-15.el8.x86_64.rpm nautilus-debugsource-3.28.1-15.el8.i686.rpm nautilus-debugsource-3.28.1-15.el8.x86_64.rpm nautilus-devel-3.28.1-15.el8.i686.rpm nautilus-devel-3.28.1-15.el8.x86_64.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.i686.rpm nautilus-extensions-debuginfo-3.28.1-15.el8.x86_64.rpm pangomm-debuginfo-2.40.1-6.el8.i686.rpm pangomm-debuginfo-2.40.1-6.el8.x86_64.rpm pangomm-debugsource-2.40.1-6.el8.i686.rpm pangomm-debugsource-2.40.1-6.el8.x86_64.rpm pangomm-devel-2.40.1-6.el8.i686.rpm pangomm-devel-2.40.1-6.el8.x86_64.rpm soundtouch-debuginfo-2.0.0-3.el8.i686.rpm soundtouch-debuginfo-2.0.0-3.el8.x86_64.rpm soundtouch-debugsource-2.0.0-3.el8.i686.rpm soundtouch-debugsource-2.0.0-3.el8.x86_64.rpm soundtouch-devel-2.0.0-3.el8.i686.rpm soundtouch-devel-2.0.0-3.el8.x86_64.rpm vala-0.40.19-2.el8.i686.rpm vala-0.40.19-2.el8.x86_64.rpm vala-debuginfo-0.40.19-2.el8.i686.rpm vala-debuginfo-0.40.19-2.el8.x86_64.rpm vala-debugsource-0.40.19-2.el8.i686.rpm vala-debugsource-0.40.19-2.el8.x86_64.rpm vala-devel-0.40.19-2.el8.i686.rpm vala-devel-0.40.19-2.el8.x86_64.rpm valadoc-debuginfo-0.40.19-2.el8.i686.rpm valadoc-debuginfo-0.40.19-2.el8.x86_64.rpm woff2-debuginfo-1.0.2-5.el8.i686.rpm woff2-debuginfo-1.0.2-5.el8.x86_64.rpm woff2-debugsource-1.0.2-5.el8.i686.rpm woff2-debugsource-1.0.2-5.el8.x86_64.rpm woff2-devel-1.0.2-5.el8.i686.rpm woff2-devel-1.0.2-5.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
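As a sketch of how that verification can be done locally (the key path is the standard location on Red Hat Enterprise Linux installs, and the package file name is illustrative):

```shell
# Import the Red Hat release signing key from its standard location.
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# List imported keys so the fingerprint can be compared against the page above.
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

# Check the signature and digests of a downloaded package (example file name).
rpm -K glib2-2.56.4-9.el8.x86_64.rpm
```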

References:

https://access.redhat.com/security/cve/CVE-2019-13012
https://access.redhat.com/security/cve/CVE-2020-9948
https://access.redhat.com/security/cve/CVE-2020-9951
https://access.redhat.com/security/cve/CVE-2020-9983
https://access.redhat.com/security/cve/CVE-2020-13543
https://access.redhat.com/security/cve/CVE-2020-13584
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.4_release_notes/

Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2020-11-13-3 Additional information for APPLE-SA-2020-09-16-1 iOS 14.0 and iPadOS 14.0

iOS 14.0 and iPadOS 14.0 address the following issues. Information about the security content is also available at https://support.apple.com/HT211850.

AppleAVD Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An application may be able to cause unexpected system termination or write kernel memory Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2020-9958: Mohamed Ghannam (@_simo36)

Assets Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An attacker may be able to misuse a trust relationship to download malicious content Description: A trust issue was addressed by removing a legacy API. CVE-2020-9979: CodeColorist of LightYear Security Lab of AntGroup Entry updated November 12, 2020

Audio Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A malicious application may be able to read restricted memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2020-9943: JunDong Xie of Ant Group Light-Year Security Lab Entry added November 12, 2020

Audio Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An application may be able to read restricted memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2020-9944: JunDong Xie of Ant Group Light-Year Security Lab Entry added November 12, 2020

CoreAudio Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Playing a malicious audio file may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. CVE-2020-9954: Francis working with Trend Micro Zero Day Initiative, JunDong Xie of Ant Group Light-Year Security Lab Entry added November 12, 2020

CoreCapture Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2020-9949: Proteas Entry added November 12, 2020

Disk Images Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds read was addressed with improved input validation. CVE-2020-9965: Proteas CVE-2020-9966: Proteas Entry added November 12, 2020

Icons Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A malicious application may be able to identify what other applications a user has installed Description: The issue was addressed with improved handling of icon caches. CVE-2020-9773: Chilik Tamir of Zimperium zLabs

IDE Device Support Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An attacker in a privileged network position may be able to execute arbitrary code on a paired device during a debug session over the network Description: This issue was addressed by encrypting communications over the network to devices running iOS 14, iPadOS 14, tvOS 14, and watchOS 7. CVE-2020-9992: Dany Lisiansky (@DanyL931), Nikias Bassen of Zimperium zLabs Entry updated September 17, 2020

ImageIO Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2020-9961: Xingwei Lin of Ant Security Light-Year Lab Entry added November 12, 2020

ImageIO Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Opening a maliciously crafted PDF file may lead to an unexpected application termination or arbitrary code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2020-9876: Mickey Jin of Trend Micro Entry added November 12, 2020

IOSurfaceAccelerator Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A local user may be able to read kernel memory Description: A memory initialization issue was addressed with improved memory handling. CVE-2020-9964: Mohamed Ghannam (@_simo36), Tommy Muir (@Muirey03)

Kernel Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An attacker in a privileged network position may be able to inject into active connections within a VPN tunnel Description: A routing issue was addressed with improved restrictions. CVE-2019-14899: William J. Tolley, Beau Kujath, and Jedidiah R. Crandall Entry added November 12, 2020

Keyboard Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved state management. CVE-2020-9976: Rias A. Sherzad of JAIDE GmbH in Hamburg, Germany

libxml2 Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2020-9981: found by OSS-Fuzz Entry added November 12, 2020

Mail Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A remote attacker may be able to unexpectedly alter application state Description: This issue was addressed with improved checks. CVE-2020-9941: Fabian Ising of FH Münster University of Applied Sciences and Damian Poddebniak of FH Münster University of Applied Sciences Entry added November 12, 2020

Messages Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A local user may be able to discover a user’s deleted messages Description: The issue was addressed with improved deletion. CVE-2020-9988: William Breuer of the Netherlands CVE-2020-9989: von Brunn Media Entry added November 12, 2020

Model I/O Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2020-13520: Aleksandar Nikolic of Cisco Talos Entry added November 12, 2020

Model I/O Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. CVE-2020-6147: Aleksandar Nikolic of Cisco Talos CVE-2020-9972: Aleksandar Nikolic of Cisco Talos Entry added November 12, 2020

Model I/O Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2020-9973: Aleksandar Nikolic of Cisco Talos

NetworkExtension Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A malicious application may be able to elevate privileges Description: A use after free issue was addressed with improved memory management. CVE-2020-9996: Zhiwei Yuan of Trend Micro iCore Team, Junzhi Lu and Mickey Jin of Trend Micro Entry added November 12, 2020

Phone Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: The screen lock may not engage after the specified time period Description: This issue was addressed with improved checks. CVE-2020-9946: Daniel Larsson of iolight AB

Quick Look Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A malicious app may be able to determine the existence of files on the computer Description: The issue was addressed with improved handling of icon caches. CVE-2020-9963: Csaba Fitzl (@theevilbit) of Offensive Security Entry added November 12, 2020

Safari Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A malicious application may be able to determine a user's open tabs in Safari Description: A validation issue existed in the entitlement verification. CVE-2020-9977: Josh Parnham (@joshparnham) Entry added November 12, 2020

Safari Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Visiting a malicious website may lead to address bar spoofing Description: The issue was addressed with improved UI handling. CVE-2020-9993: Masato Sugiyama (@smasato) of University of Tsukuba, Piotr Duszynski Entry added November 12, 2020

Sandbox Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A local user may be able to view sensitive user information Description: An access issue was addressed with additional sandbox restrictions. CVE-2020-9969: Wojciech Reguła of SecuRing (wojciechregula.blog) Entry added November 12, 2020

Sandbox Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A malicious application may be able to access restricted files Description: A logic issue was addressed with improved restrictions. CVE-2020-9968: Adam Chester (@xpn) of TrustedSec Entry updated September 17, 2020

Siri Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A person with physical access to an iOS device may be able to view notification contents from the lockscreen Description: A lock screen issue allowed access to messages on a locked device. CVE-2020-9959: five anonymous researchers, Andrew Goldberg of The University of Texas at Austin, McCombs School of Business, Meli̇h Kerem Güneş of Li̇v College, Sinan Gulguler

SQLite Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2020-13434 CVE-2020-13435 CVE-2020-9991 Entry added November 12, 2020

SQLite Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A remote attacker may be able to leak memory Description: An information disclosure issue was addressed with improved state management. CVE-2020-9849 Entry added November 12, 2020

SQLite Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Multiple issues in SQLite Description: Multiple issues were addressed by updating SQLite to version 3.32.3. CVE-2020-15358 Entry added November 12, 2020

SQLite Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A maliciously crafted SQL query may lead to data corruption Description: This issue was addressed with improved checks. CVE-2020-13631 Entry added November 12, 2020

SQLite Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: A remote attacker may be able to cause arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2020-13630 Entry added November 12, 2020

WebKit Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2020-9947: cc working with Trend Micro Zero Day Initiative CVE-2020-9950: cc working with Trend Micro Zero Day Initiative CVE-2020-9951: Marcin 'Icewall' Noga of Cisco Talos Entry added November 12, 2020

WebKit Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing maliciously crafted web content may lead to code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2020-9983: zhunki Entry added November 12, 2020

WebKit Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: Processing maliciously crafted web content may lead to a cross site scripting attack Description: An input validation issue was addressed with improved input validation. CVE-2020-9952: Ryan Pickren (ryanpickren.com)

Wi-Fi Available for: iPhone 6s and later, iPod touch 7th generation, iPad Air 2 and later, and iPad mini 4 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved state management. CVE-2020-10013: Yu Wang of Didi Research America Entry added November 12, 2020

Additional recognition

App Store We would like to acknowledge Giyas Umarov of Holmdel High School for their assistance.

Audio We would like to acknowledge JunDong Xie and XingWei Lin of Ant-financial Light-Year Security Lab for their assistance. Entry added November 12, 2020

Bluetooth We would like to acknowledge Andy Davis of NCC Group and Dennis Heinze (@ttdennis) of TU Darmstadt, Secure Mobile Networking Lab for their assistance.

CallKit We would like to acknowledge Federico Zanetello for their assistance.

CarPlay We would like to acknowledge an anonymous researcher for their assistance.

Clang We would like to acknowledge Brandon Azad of Google Project Zero for their assistance. Entry added November 12, 2020

Core Location We would like to acknowledge Yiğit Can YILMAZ (@yilmazcanyigit) for their assistance.

debugserver We would like to acknowledge Linus Henze (pinauten.de) for their assistance.

iAP We would like to acknowledge Andy Davis of NCC Group for their assistance.

iBoot We would like to acknowledge Brandon Azad of Google Project Zero for their assistance.

Kernel We would like to acknowledge Brandon Azad of Google Project Zero, Stephen Röttger of Google for their assistance. Entry updated November 12, 2020

libarchive We would like to acknowledge Dzmitry Plotnikau and an anonymous researcher for their assistance.

lldb We would like to acknowledge Linus Henze (pinauten.de) for their assistance. Entry added November 12, 2020

Location Framework We would like to acknowledge Nicolas Brunner (linkedin.com/in/nicolas-brunner-651bb4128) for their assistance. Entry updated October 19, 2020

Mail We would like to acknowledge an anonymous researcher for their assistance. Entry added November 12, 2020

Mail Drafts We would like to acknowledge Jon Bottarini of HackerOne for their assistance. Entry added November 12, 2020

Maps We would like to acknowledge Matthew Dolan of Amazon Alexa for their assistance.

NetworkExtension We would like to acknowledge Thijs Alkemade of Computest and ‘Qubo Song’ of ‘Symantec, a division of Broadcom’ for their assistance.

Phone Keypad We would like to acknowledge Hasan Fahrettin Kaya of Akdeniz University, an anonymous researcher for their assistance. Entry updated November 12, 2020

Safari We would like to acknowledge Andreas Gutmann (@KryptoAndI) of OneSpan's Innovation Centre (onespan.com) and University College London, Steven J. Murdoch (@SJMurdoch) of OneSpan's Innovation Centre (onespan.com) and University College London, Jack Cable of Lightning Security, Ryan Pickren (ryanpickren.com), Yair Amit for their assistance. Entry added November 12, 2020

Safari Reader We would like to acknowledge Zhiyang Zeng (@Wester) of OPPO ZIWU Security Lab for their assistance. Entry added November 12, 2020

Security We would like to acknowledge Christian Starkjohann of Objective Development Software GmbH for their assistance. Entry added November 12, 2020

Status Bar We would like to acknowledge Abdul M. Majumder, Abdullah Fasihallah of Taif university, Adwait Vikas Bhide, Frederik Schmid, Nikita, and an anonymous researcher for their assistance.

Telephony We would like to acknowledge Onur Can Bıkmaz, Vodafone Turkey @canbkmaz, Yiğit Can YILMAZ (@yilmazcanyigit), an anonymous researcher for their assistance. Entry updated November 12, 2020

UIKit We would like to acknowledge Borja Marcos of Sarenet, Simon de Vegt, and Talal Haj Bakry (@hajbakri) and Tommy Mysk (@tommymysk) of Mysk Inc for their assistance.

Web App We would like to acknowledge Augusto Alvarez of Outcourse Limited for their assistance.

WebKit We would like to acknowledge Pawel Wylecial of REDTEAM.PL, Ryan Pickren (ryanpickren.com), Tsubasa FUJII (@reinforchu), Zhiyang Zeng (@Wester) of OPPO ZIWU Security Lab for their assistance. Entry added November 12, 2020

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device.

The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated:

  • Navigate to Settings
  • Select General
  • Select About. The version after applying this update will be "iOS 14.0 and iPadOS 14.0".

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAl+uxqgACgkQZcsbuWJ6 jjBhIhAAhLzDSjgjVzG0JLzEerhFBcAWQ1G8ogmIdxuC0aQfvxO4V1NriKzUcmsZ UgQCEdN4kzfLsj3KeuwSeq0pg2CX1eZdgY/FyuOBRzljsmGPXJgkyYapJww6mC8n 7jeJazKusiyaRmScLYDwvbOQGlaqCfu6HrM9umMpLfwPGjFqe/gz8jyxohdVZx9t pNC0g9l37dVJIvFRc1mAm9HAnIQoL8CDOEd96jVYiecB8xk0X6CwjZ7nGzYJc5LZ A54EaN0dDz+8q8jgylmAd8xkA8Pgdsxw+LWDr1TxPuu3XIzYa98S1AsItK2eiWx8 pIhrzVZ3fk1w3+W/cSWrgzUq4ouijWcWw9dmVgxmzv9ldL/pS+wIgFsYLJm4xHAp PH+9p3JmMQks9BWgr3h+NEcJwCUm5J7y0PNuCnQL2iKzn4jikqgfCXHZOidkPV3t KjeeIFX30AGI7cUqhRl9GbRn8l5SA4pbd4a0Y5df1PgkDjSXxw91Z1+5S15Qfrzs K8pBlPH37yU3aqMEvxBsN5Fd7vdFdA+pV/aWG5tw4pUlZJC25c50w1ZW0vrnsisg /isPJqXhUWiGAfQ7s5W6W3AMs4PyvRjY+7zzGiHAd+wNkUNwVTbXvKP4W4n/vGH8 uARpQRQsureymLerXpVTwH8ZoeDEeZZwaqNHTQKg/M9ifAZPZUA= =WdqR -----END PGP SIGNATURE-----

Bugs fixed (https://bugzilla.redhat.com/):

1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation

JIRA issues fixed (https://issues.jboss.org/):

LOG-1328 - Port fix to 5.0.z for BZ-1945168

Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.13. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2021:2122

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

This update fixes the following bug among others:

  • Previously, resources for the ClusterOperator were being created early in the update process, which led to update failures when the ClusterOperator had no status condition while Operators were updating. This bug fix changes the timing of when these resources are created. As a result, updates can take place without errors. (BZ#1959238)

Security Fix(es):

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-x86_64

The image digest is sha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-s390x

The image digest is sha256:4cf44e68413acad063203e1ee8982fd01d8b9c1f8643a5b31cd7ff341b3199cd

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.13-ppc64le

The image digest is sha256:d47ce972f87f14f1f3c5d50428d2255d1256dae3f45c938ace88547478643e36

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
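The digest values quoted above can be checked mechanically before an update is applied. A minimal shell sketch follows; only the expected x86_64 digest is taken from this advisory, and the commented-out extraction command (using oc and jq) is an illustrative assumption, not part of the advisory:

```shell
# Expected digest for the x86_64 release image, as stated in this advisory.
expected="sha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4"

# On a real cluster the reported value would be extracted from the tool's
# output, e.g. (hypothetical extraction, assumes jq is installed):
#   reported=$(oc adm release info \
#     quay.io/openshift-release-dev/ocp-release:4.7.13-x86_64 -o json | jq -r .digest)
# Hard-coded here so the sketch is self-contained:
reported="sha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4"

if [ "$reported" = "$expected" ]; then
    echo "digest OK"
else
    echo "digest MISMATCH" >&2
    exit 1
fi
```

The same comparison applies to the s390x and ppc64le digests listed above.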

Solution:

For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html

Bugs fixed (https://bugzilla.redhat.com/):

1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923268 - [Assisted-4.7] [Staging] Using two both spelling "canceled" "cancelled"
1947216 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1953963 - Enable/Disable host operations returns cluster resource with incomplete hosts list
1957749 - ovn-kubernetes pod should have CPU and memory requests set but not limits
1959238 - CVO creating cloud-controller-manager too early causing upgrade failures
1960103 - SR-IOV obliviously reboot the node
1961941 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1962302 - packageserver clusteroperator does not set reason or message for Available condition
1962312 - Deployment considered unhealthy despite being available and at latest generation
1962435 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1963115 - Test verify /run filesystem contents failing


Gentoo Linux Security Advisory GLSA 202012-10

                                        https://security.gentoo.org/

Severity: Normal
Title: WebkitGTK+: Multiple vulnerabilities
Date: December 23, 2020
Bugs: #755947
ID: 202012-10


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which could result in the arbitrary execution of code.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

 -------------------------------------------------------------------
  Package              /     Vulnerable     /            Unaffected
 -------------------------------------------------------------------
  1  net-libs/webkit-gtk       < 2.30.3                  >= 2.30.3

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the CVE identifiers referenced below for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.30.3"
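The version bound in the atom above (">=net-libs/webkit-gtk-2.30.3") can be checked with a plain version-sort comparison. A minimal sketch follows; the installed version below is a hypothetical value (on a real system it would come from the package manager, not be hard-coded), and only the fixed version 2.30.3 comes from this advisory:

```shell
# Hypothetical installed version for illustration only.
installed="2.30.1"
# Fixed version from this advisory.
fixed="2.30.3"

# sort -V orders version strings by numeric components; the older
# version sorts first, so comparing against the head tells us which is older.
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)

if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "vulnerable: upgrade to >=${fixed} required"
else
    echo "ok: ${installed} is not older than ${fixed}"
fi
```

With the hypothetical 2.30.1 above, the check reports that an upgrade is required.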

References

[ 1 ] CVE-2020-13543
      https://nvd.nist.gov/vuln/detail/CVE-2020-13543
[ 2 ] CVE-2020-13584
      https://nvd.nist.gov/vuln/detail/CVE-2020-13584
[ 3 ] CVE-2020-9948
      https://nvd.nist.gov/vuln/detail/CVE-2020-9948
[ 4 ] CVE-2020-9951
      https://nvd.nist.gov/vuln/detail/CVE-2020-9951
[ 5 ] CVE-2020-9952
      https://nvd.nist.gov/vuln/detail/CVE-2020-9952
[ 6 ] CVE-2020-9983
      https://nvd.nist.gov/vuln/detail/CVE-2020-9983
[ 7 ] WSA-2020-0008
      https://webkitgtk.org/security/WSA-2020-0008.html
[ 8 ] WSA-2020-0009
      https://webkitgtk.org/security/WSA-2020-0009.html

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202012-10

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

CVE-2020-9983: zhunki

Installation note:

Safari 14.0 may be obtained from the Mac App Store.

Alternatively, on your watch, select "My Watch > General > About"


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202010-1511",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.5"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.9"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "webkitgtk\\+",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "webkit",
        "version": "2.30.3"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.0"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.0"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "162689"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "162877"
      }
    ],
    "trust": 0.4
  },
  "cve": "CVE-2020-9951",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-9951",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-188076",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2020-9951",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-9951",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-188076",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-9951",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9951"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A use after free issue was addressed with improved memory management. This issue is fixed in Safari 14.0. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari is a web browser of Apple (Apple), the default browser included with Mac OS X and iOS operating systems. A resource management error vulnerability exists in Apple Safari. The vulnerability originates from the aboutBlankURL() function of the WebKit component in Apple Safari. Bugs fixed (https://bugzilla.redhat.com/):\n\n1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve\n1945703 - \"Guest OS Info\" availability in VMI describe is flaky\n1958816 - [2.6.z] KubeMacPool fails to start due to OOM likely caused by a high number of Pods running in the cluster\n1963275 - migration controller null pointer dereference\n1965099 - Live Migration double handoff to virt-handler causes connection failures\n1965181 - CDI importer doesn\u0027t report AwaitingVDDK like it used to\n1967086 - Cloning DataVolumes between namespaces fails while creating cdi-upload pod\n1967887 - [2.6.6] nmstate is not progressing on a node and not configuring vlan filtering that causes an outage for VMs\n1969756 - Windows VMs fail to start on air-gapped environments\n1970372 - Virt-handler fails to verify container-disk\n1973227 - segfault in virt-controller during pdb deletion\n1974084 - 2.6.6 containers\n1975212 - No Virtual Machine Templates Found [EDIT - all templates are marked as depracted]\n1975727 - [Regression][VMIO][Warm] The third precopy does not end in warm migration\n1977756 - [2.6.z] PVC keeps in pending when using hostpath-provisioner\n1982760 - [v2v] no kind VirtualMachine is registered for version \\\"kubevirt.io/v1\\\" i... \n1986989 - OpenShift Virtualization 2.6.z cannot be upgraded to 4.8.0 initially deployed starting with \u003c= 4.8\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: GNOME security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2021:1586-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:1586\nIssue date:        2021-05-18\nCVE Names:         CVE-2019-13012 CVE-2020-9948 CVE-2020-9951\n                   CVE-2020-9983 CVE-2020-13543 CVE-2020-13584\n====================================================================\n1. Summary:\n\nAn update for GNOME is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nGNOME is the default desktop environment of Red Hat Enterprise Linux. \n\nThe following packages have been upgraded to a later upstream version:\naccountsservice (0.6.55), webkit2gtk3 (2.30.4). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.4 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nGDM must be restarted for this update to take effect. \n\n5. 
5. Bugs fixed (https://bugzilla.redhat.com/):

837035 - Shortcuts -- alfanumeric vs numpad
1152037 - RFE: use virtio-scsi disk bus with discard='unmap' for guests that support it
1464902 - Crash in dls_async_task_complete
1671761 - Adding new workspaces is broken in gnome session under wayland
1700002 - adding several printers is stalling the printer plugin in GSD
1705392 - Changing screen resolution while recording screen will break the video.
1728632 - CVE-2019-13012 glib2: insecure permissions for files and directories
1728896 - glib2: 'keyfile' backend for gsettings not loaded
1765627 - Can't install both gnome-online-accounts-devel.i686 and gnome-online-accounts-devel.x86_64 on RHEL 8.1
1786496 - gnome-shell killed by SIGABRT in g_assertion_message_expr.cold.16()
1796916 - Notification appears with incorrect "system not registered - register to get updates" message on RHEL8.2 when locale is non-English
1802105 - rpm based extensions in RHEL8 should not receive updates from extensions.gnome.org
1833787 - Unable to disable onscreen keyboard in touch screen machine
1842229 - double-touch desktop icons fails sometimes
1845660 - JS WARNING from gnome-shell [MetaWindowX11]
1846376 - rebase accountsservice to latest release
1854290 - Physical device fails to wakeup via org.gnome.ScreenSaver D-Bus API
1860946 - gnome-shell logs AuthList property defined with 'let' or 'const'
1861357 - Login shows Exclamation Sign with no message for Caps Lock on
1861769 - Authentication fails when Wayland is enabled along with polyinstantiation of /tmp
1865718 - Right click menu is not translated into Japanese when desktop-icons extension is enabled
1870837 - gnome control-center and settings-daemon don't handle systems that are registered but have no attached entitlements properly
1871041 - on screen keyboard (OSK) does not disappear completely, part of OSK remains on the screen
1876291 - [ALL LANG] Unlocalized strings in About -> Register System.
1881312 - [Bug] gnome-shell errors in syslog
1883304 - Rebase to WebKitGTK 2.30
1883868 - [RFE] Dump JS stack trace by default when gnome-shell crashes
1886822 - License differs from actual
1888407 - Flatpak updates and removals get confused if same ref occurs in multiple remotes
1889411 - self-signed cert in owncloud: HTTP Error: Unacceptable TLS certificate
1889528 - [8.4] Right GLX stereo texture is potentially leaked for each closed window
1901212 - CVE-2020-13584 webkitgtk: use-after-free may lead to arbitrary code execution
1901214 - CVE-2020-9948 webkitgtk: type confusion may lead to arbitrary code execution
1901216 - CVE-2020-9951 webkitgtk: use-after-free may lead to arbitrary code execution
1901221 - CVE-2020-9983 webkitgtk: out-of-bounds write may lead to code execution
1903043 - gnome-control-center SEGFAULT at ../panels/printers/pp-printer-entry.c:280
1903568 - CVE-2020-13543 webkitgtk: use-after-free may lead to arbitrary code execution
1906499 - Nautilus creates invalid bookmarks for Samba shares
1918391 - gdm isn't killing the login screen on login after all
1919429 - Ship libdazzle-devel in CRB
1919432 - Ship libepubgen-devel in CRB
1919435 - Ship woff2-devel in CRB
1919467 - Mutter: mouse click doesn't work when using 10-bit graphic monitor
1921151 - [nvidia Ampere] stutters when using nouveau with llvmpipe

6. Package List:

Red Hat Enterprise Linux AppStream (v.
8):\n\nSource:\nOpenEXR-2.2.0-12.el8.src.rpm\naccountsservice-0.6.55-1.el8.src.rpm\natkmm-2.24.2-7.el8.src.rpm\ncairomm-1.12.0-8.el8.src.rpm\nchrome-gnome-shell-10.1-7.el8.src.rpm\ndleyna-core-0.6.0-3.el8.src.rpm\ndleyna-server-0.6.0-3.el8.src.rpm\nenchant2-2.2.3-3.el8.src.rpm\ngdm-3.28.3-39.el8.src.rpm\ngeoclue2-2.5.5-2.el8.src.rpm\ngeocode-glib-3.26.0-3.el8.src.rpm\ngjs-1.56.2-5.el8.src.rpm\nglibmm24-2.56.0-2.el8.src.rpm\ngnome-boxes-3.36.5-8.el8.src.rpm\ngnome-control-center-3.28.2-27.el8.src.rpm\ngnome-online-accounts-3.28.2-2.el8.src.rpm\ngnome-photos-3.28.1-4.el8.src.rpm\ngnome-settings-daemon-3.32.0-14.el8.src.rpm\ngnome-shell-3.32.2-30.el8.src.rpm\ngnome-shell-extensions-3.32.1-14.el8.src.rpm\ngnome-software-3.36.1-5.el8.src.rpm\ngnome-terminal-3.28.3-3.el8.src.rpm\ngtk2-2.24.32-5.el8.src.rpm\ngtkmm24-2.24.5-6.el8.src.rpm\ngtkmm30-3.22.2-3.el8.src.rpm\ngvfs-1.36.2-11.el8.src.rpm\nlibdazzle-3.28.5-2.el8.src.rpm\nlibepubgen-0.1.0-3.el8.src.rpm\nlibsigc++20-2.10.0-6.el8.src.rpm\nlibvisual-0.4.0-25.el8.src.rpm\nmutter-3.32.2-57.el8.src.rpm\nnautilus-3.28.1-15.el8.src.rpm\npangomm-2.40.1-6.el8.src.rpm\nsoundtouch-2.0.0-3.el8.src.rpm\nwebkit2gtk3-2.30.4-1.el8.src.rpm\nwoff2-1.0.2-5.el8.src.rpm\n\naarch64:\nOpenEXR-debuginfo-2.2.0-12.el8.aarch64.rpm\nOpenEXR-debugsource-2.2.0-12.el8.aarch64.rpm\nOpenEXR-libs-2.2.0-12.el8.aarch64.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.aarch64.rpm\naccountsservice-0.6.55-1.el8.aarch64.rpm\naccountsservice-debuginfo-0.6.55-1.el8.aarch64.rpm\naccountsservice-debugsource-0.6.55-1.el8.aarch64.rpm\naccountsservice-libs-0.6.55-1.el8.aarch64.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.aarch64.rpm\natkmm-2.24.2-7.el8.aarch64.rpm\natkmm-debuginfo-2.24.2-7.el8.aarch64.rpm\natkmm-debugsource-2.24.2-7.el8.aarch64.rpm\ncairomm-1.12.0-8.el8.aarch64.rpm\ncairomm-debuginfo-1.12.0-8.el8.aarch64.rpm\ncairomm-debugsource-1.12.0-8.el8.aarch64.rpm\nchrome-gnome-shell-10.1-7.el8.aarch64.rpm\nenchant2-2.2.3-3.el8.aarch64.rpm\nenchant2-aspell-debugin
fo-2.2.3-3.el8.aarch64.rpm\nenchant2-debuginfo-2.2.3-3.el8.aarch64.rpm\nenchant2-debugsource-2.2.3-3.el8.aarch64.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.aarch64.rpm\ngdm-3.28.3-39.el8.aarch64.rpm\ngdm-debuginfo-3.28.3-39.el8.aarch64.rpm\ngdm-debugsource-3.28.3-39.el8.aarch64.rpm\ngeoclue2-2.5.5-2.el8.aarch64.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.aarch64.rpm\ngeoclue2-debugsource-2.5.5-2.el8.aarch64.rpm\ngeoclue2-demos-2.5.5-2.el8.aarch64.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.aarch64.rpm\ngeoclue2-libs-2.5.5-2.el8.aarch64.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.aarch64.rpm\ngeocode-glib-3.26.0-3.el8.aarch64.rpm\ngeocode-glib-debuginfo-3.26.0-3.el8.aarch64.rpm\ngeocode-glib-debugsource-3.26.0-3.el8.aarch64.rpm\ngeocode-glib-devel-3.26.0-3.el8.aarch64.rpm\ngjs-1.56.2-5.el8.aarch64.rpm\ngjs-debuginfo-1.56.2-5.el8.aarch64.rpm\ngjs-debugsource-1.56.2-5.el8.aarch64.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.aarch64.rpm\nglibmm24-2.56.0-2.el8.aarch64.rpm\nglibmm24-debuginfo-2.56.0-2.el8.aarch64.rpm\nglibmm24-debugsource-2.56.0-2.el8.aarch64.rpm\ngnome-control-center-3.28.2-27.el8.aarch64.rpm\ngnome-control-center-debuginfo-3.28.2-27.el8.aarch64.rpm\ngnome-control-center-debugsource-3.28.2-27.el8.aarch64.rpm\ngnome-online-accounts-3.28.2-2.el8.aarch64.rpm\ngnome-online-accounts-debuginfo-3.28.2-2.el8.aarch64.rpm\ngnome-online-accounts-debugsource-3.28.2-2.el8.aarch64.rpm\ngnome-online-accounts-devel-3.28.2-2.el8.aarch64.rpm\ngnome-settings-daemon-3.32.0-14.el8.aarch64.rpm\ngnome-settings-daemon-debuginfo-3.32.0-14.el8.aarch64.rpm\ngnome-settings-daemon-debugsource-3.32.0-14.el8.aarch64.rpm\ngnome-shell-3.32.2-30.el8.aarch64.rpm\ngnome-shell-debuginfo-3.32.2-30.el8.aarch64.rpm\ngnome-shell-debugsource-3.32.2-30.el8.aarch64.rpm\ngnome-software-3.36.1-5.el8.aarch64.rpm\ngnome-software-debuginfo-3.36.1-5.el8.aarch64.rpm\ngnome-software-debugsource-3.36.1-5.el8.aarch64.rpm\ngnome-terminal-3.28.3-3.el8.aarch64.rpm\ngnome-terminal-debuginfo-3.28.3-3.el8.aarch64.rpm\ngnome-terminal
-debugsource-3.28.3-3.el8.aarch64.rpm\ngnome-terminal-nautilus-3.28.3-3.el8.aarch64.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-3.el8.aarch64.rpm\ngtk2-2.24.32-5.el8.aarch64.rpm\ngtk2-debuginfo-2.24.32-5.el8.aarch64.rpm\ngtk2-debugsource-2.24.32-5.el8.aarch64.rpm\ngtk2-devel-2.24.32-5.el8.aarch64.rpm\ngtk2-devel-debuginfo-2.24.32-5.el8.aarch64.rpm\ngtk2-devel-docs-2.24.32-5.el8.aarch64.rpm\ngtk2-immodule-xim-2.24.32-5.el8.aarch64.rpm\ngtk2-immodule-xim-debuginfo-2.24.32-5.el8.aarch64.rpm\ngtk2-immodules-2.24.32-5.el8.aarch64.rpm\ngtk2-immodules-debuginfo-2.24.32-5.el8.aarch64.rpm\ngtkmm24-2.24.5-6.el8.aarch64.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.aarch64.rpm\ngtkmm24-debugsource-2.24.5-6.el8.aarch64.rpm\ngtkmm30-3.22.2-3.el8.aarch64.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.aarch64.rpm\ngtkmm30-debugsource-3.22.2-3.el8.aarch64.rpm\ngvfs-1.36.2-11.el8.aarch64.rpm\ngvfs-afc-1.36.2-11.el8.aarch64.rpm\ngvfs-afc-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-afp-1.36.2-11.el8.aarch64.rpm\ngvfs-afp-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-archive-1.36.2-11.el8.aarch64.rpm\ngvfs-archive-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-client-1.36.2-11.el8.aarch64.rpm\ngvfs-client-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-debugsource-1.36.2-11.el8.aarch64.rpm\ngvfs-devel-1.36.2-11.el8.aarch64.rpm\ngvfs-fuse-1.36.2-11.el8.aarch64.rpm\ngvfs-fuse-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-goa-1.36.2-11.el8.aarch64.rpm\ngvfs-goa-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-gphoto2-1.36.2-11.el8.aarch64.rpm\ngvfs-gphoto2-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-mtp-1.36.2-11.el8.aarch64.rpm\ngvfs-mtp-debuginfo-1.36.2-11.el8.aarch64.rpm\ngvfs-smb-1.36.2-11.el8.aarch64.rpm\ngvfs-smb-debuginfo-1.36.2-11.el8.aarch64.rpm\nlibsigc++20-2.10.0-6.el8.aarch64.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.aarch64.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.aarch64.rpm\nlibvisual-0.4.0-25.el8.aarch64.rpm\nlibvisual-debuginfo-0.4.0-25.el8.aarch64.rpm\nlibvisual-debugsource-0.4.
0-25.el8.aarch64.rpm\nmutter-3.32.2-57.el8.aarch64.rpm\nmutter-debuginfo-3.32.2-57.el8.aarch64.rpm\nmutter-debugsource-3.32.2-57.el8.aarch64.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.aarch64.rpm\nnautilus-3.28.1-15.el8.aarch64.rpm\nnautilus-debuginfo-3.28.1-15.el8.aarch64.rpm\nnautilus-debugsource-3.28.1-15.el8.aarch64.rpm\nnautilus-extensions-3.28.1-15.el8.aarch64.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.aarch64.rpm\npangomm-2.40.1-6.el8.aarch64.rpm\npangomm-debuginfo-2.40.1-6.el8.aarch64.rpm\npangomm-debugsource-2.40.1-6.el8.aarch64.rpm\nsoundtouch-2.0.0-3.el8.aarch64.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.aarch64.rpm\nsoundtouch-debugsource-2.0.0-3.el8.aarch64.rpm\nwebkit2gtk3-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.30.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.aarch64.rpm\nwoff2-1.0.2-5.el8.aarch64.rpm\nwoff2-debuginfo-1.0.2-5.el8.aarch64.rpm\nwoff2-debugsource-1.0.2-5.el8.aarch64.rpm\n\nnoarch:\ngnome-classic-session-3.32.1-14.el8.noarch.rpm\ngnome-control-center-filesystem-3.28.2-27.el8.noarch.rpm\ngnome-shell-extension-apps-menu-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-auto-move-windows-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-common-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-dash-to-dock-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-desktop-icons-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-disable-screenshield-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-drive-menu-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-horizontal-workspaces-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-launch-new-instance-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-native-window-placement-3.32.1-14.el8.noarch.rpm\ngnome-shel
l-extension-no-hot-corner-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-panel-favorites-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-places-menu-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-screenshot-window-sizer-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-systemMonitor-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-top-icons-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-updates-dialog-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-user-theme-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-window-grouper-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-window-list-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-windowsNavigator-3.32.1-14.el8.noarch.rpm\ngnome-shell-extension-workspace-indicator-3.32.1-14.el8.noarch.rpm\n\nppc64le:\nOpenEXR-debuginfo-2.2.0-12.el8.ppc64le.rpm\nOpenEXR-debugsource-2.2.0-12.el8.ppc64le.rpm\nOpenEXR-libs-2.2.0-12.el8.ppc64le.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.ppc64le.rpm\naccountsservice-0.6.55-1.el8.ppc64le.rpm\naccountsservice-debuginfo-0.6.55-1.el8.ppc64le.rpm\naccountsservice-debugsource-0.6.55-1.el8.ppc64le.rpm\naccountsservice-libs-0.6.55-1.el8.ppc64le.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.ppc64le.rpm\natkmm-2.24.2-7.el8.ppc64le.rpm\natkmm-debuginfo-2.24.2-7.el8.ppc64le.rpm\natkmm-debugsource-2.24.2-7.el8.ppc64le.rpm\ncairomm-1.12.0-8.el8.ppc64le.rpm\ncairomm-debuginfo-1.12.0-8.el8.ppc64le.rpm\ncairomm-debugsource-1.12.0-8.el8.ppc64le.rpm\nchrome-gnome-shell-10.1-7.el8.ppc64le.rpm\ndleyna-core-0.6.0-3.el8.ppc64le.rpm\ndleyna-core-debuginfo-0.6.0-3.el8.ppc64le.rpm\ndleyna-core-debugsource-0.6.0-3.el8.ppc64le.rpm\ndleyna-server-0.6.0-3.el8.ppc64le.rpm\ndleyna-server-debuginfo-0.6.0-3.el8.ppc64le.rpm\ndleyna-server-debugsource-0.6.0-3.el8.ppc64le.rpm\nenchant2-2.2.3-3.el8.ppc64le.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.ppc64le.rpm\nenchant2-debuginfo-2.2.3-3.el8.ppc64le.rpm\nenchant2-debugsource-2.2.3-3.el8.ppc64le.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.ppc64le.rpm\ngdm-3.28.3-39.el8.ppc64le.rpm
\ngdm-debuginfo-3.28.3-39.el8.ppc64le.rpm\ngdm-debugsource-3.28.3-39.el8.ppc64le.rpm\ngeoclue2-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-debugsource-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-demos-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-libs-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.ppc64le.rpm\ngeocode-glib-3.26.0-3.el8.ppc64le.rpm\ngeocode-glib-debuginfo-3.26.0-3.el8.ppc64le.rpm\ngeocode-glib-debugsource-3.26.0-3.el8.ppc64le.rpm\ngeocode-glib-devel-3.26.0-3.el8.ppc64le.rpm\ngjs-1.56.2-5.el8.ppc64le.rpm\ngjs-debuginfo-1.56.2-5.el8.ppc64le.rpm\ngjs-debugsource-1.56.2-5.el8.ppc64le.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.ppc64le.rpm\nglibmm24-2.56.0-2.el8.ppc64le.rpm\nglibmm24-debuginfo-2.56.0-2.el8.ppc64le.rpm\nglibmm24-debugsource-2.56.0-2.el8.ppc64le.rpm\ngnome-control-center-3.28.2-27.el8.ppc64le.rpm\ngnome-control-center-debuginfo-3.28.2-27.el8.ppc64le.rpm\ngnome-control-center-debugsource-3.28.2-27.el8.ppc64le.rpm\ngnome-online-accounts-3.28.2-2.el8.ppc64le.rpm\ngnome-online-accounts-debuginfo-3.28.2-2.el8.ppc64le.rpm\ngnome-online-accounts-debugsource-3.28.2-2.el8.ppc64le.rpm\ngnome-online-accounts-devel-3.28.2-2.el8.ppc64le.rpm\ngnome-photos-3.28.1-4.el8.ppc64le.rpm\ngnome-photos-debuginfo-3.28.1-4.el8.ppc64le.rpm\ngnome-photos-debugsource-3.28.1-4.el8.ppc64le.rpm\ngnome-photos-tests-3.28.1-4.el8.ppc64le.rpm\ngnome-settings-daemon-3.32.0-14.el8.ppc64le.rpm\ngnome-settings-daemon-debuginfo-3.32.0-14.el8.ppc64le.rpm\ngnome-settings-daemon-debugsource-3.32.0-14.el8.ppc64le.rpm\ngnome-shell-3.32.2-30.el8.ppc64le.rpm\ngnome-shell-debuginfo-3.32.2-30.el8.ppc64le.rpm\ngnome-shell-debugsource-3.32.2-30.el8.ppc64le.rpm\ngnome-software-3.36.1-5.el8.ppc64le.rpm\ngnome-software-debuginfo-3.36.1-5.el8.ppc64le.rpm\ngnome-software-debugsource-3.36.1-5.el8.ppc64le.rpm\ngnome-terminal-3.28.3-3.el8.ppc64le.rpm\ngnome-terminal-debuginfo-3.28.3-3.el8.ppc64le.rpm\ngnome-terminal-debugsource-3
.28.3-3.el8.ppc64le.rpm\ngnome-terminal-nautilus-3.28.3-3.el8.ppc64le.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-3.el8.ppc64le.rpm\ngtk2-2.24.32-5.el8.ppc64le.rpm\ngtk2-debuginfo-2.24.32-5.el8.ppc64le.rpm\ngtk2-debugsource-2.24.32-5.el8.ppc64le.rpm\ngtk2-devel-2.24.32-5.el8.ppc64le.rpm\ngtk2-devel-debuginfo-2.24.32-5.el8.ppc64le.rpm\ngtk2-devel-docs-2.24.32-5.el8.ppc64le.rpm\ngtk2-immodule-xim-2.24.32-5.el8.ppc64le.rpm\ngtk2-immodule-xim-debuginfo-2.24.32-5.el8.ppc64le.rpm\ngtk2-immodules-2.24.32-5.el8.ppc64le.rpm\ngtk2-immodules-debuginfo-2.24.32-5.el8.ppc64le.rpm\ngtkmm24-2.24.5-6.el8.ppc64le.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.ppc64le.rpm\ngtkmm24-debugsource-2.24.5-6.el8.ppc64le.rpm\ngtkmm30-3.22.2-3.el8.ppc64le.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.ppc64le.rpm\ngtkmm30-debugsource-3.22.2-3.el8.ppc64le.rpm\ngvfs-1.36.2-11.el8.ppc64le.rpm\ngvfs-afc-1.36.2-11.el8.ppc64le.rpm\ngvfs-afc-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-afp-1.36.2-11.el8.ppc64le.rpm\ngvfs-afp-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-archive-1.36.2-11.el8.ppc64le.rpm\ngvfs-archive-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-client-1.36.2-11.el8.ppc64le.rpm\ngvfs-client-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-debugsource-1.36.2-11.el8.ppc64le.rpm\ngvfs-devel-1.36.2-11.el8.ppc64le.rpm\ngvfs-fuse-1.36.2-11.el8.ppc64le.rpm\ngvfs-fuse-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-goa-1.36.2-11.el8.ppc64le.rpm\ngvfs-goa-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-gphoto2-1.36.2-11.el8.ppc64le.rpm\ngvfs-gphoto2-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-mtp-1.36.2-11.el8.ppc64le.rpm\ngvfs-mtp-debuginfo-1.36.2-11.el8.ppc64le.rpm\ngvfs-smb-1.36.2-11.el8.ppc64le.rpm\ngvfs-smb-debuginfo-1.36.2-11.el8.ppc64le.rpm\nlibdazzle-3.28.5-2.el8.ppc64le.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.ppc64le.rpm\nlibdazzle-debugsource-3.28.5-2.el8.ppc64le.rpm\nlibepubgen-0.1.0-3.el8.ppc64le.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.ppc64le.rpm\nlibepubgen-debugsource-0.1.0-3.el8.ppc64le.rpm
\nlibsigc++20-2.10.0-6.el8.ppc64le.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.ppc64le.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.ppc64le.rpm\nlibvisual-0.4.0-25.el8.ppc64le.rpm\nlibvisual-debuginfo-0.4.0-25.el8.ppc64le.rpm\nlibvisual-debugsource-0.4.0-25.el8.ppc64le.rpm\nmutter-3.32.2-57.el8.ppc64le.rpm\nmutter-debuginfo-3.32.2-57.el8.ppc64le.rpm\nmutter-debugsource-3.32.2-57.el8.ppc64le.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.ppc64le.rpm\nnautilus-3.28.1-15.el8.ppc64le.rpm\nnautilus-debuginfo-3.28.1-15.el8.ppc64le.rpm\nnautilus-debugsource-3.28.1-15.el8.ppc64le.rpm\nnautilus-extensions-3.28.1-15.el8.ppc64le.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.ppc64le.rpm\npangomm-2.40.1-6.el8.ppc64le.rpm\npangomm-debuginfo-2.40.1-6.el8.ppc64le.rpm\npangomm-debugsource-2.40.1-6.el8.ppc64le.rpm\nsoundtouch-2.0.0-3.el8.ppc64le.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.ppc64le.rpm\nsoundtouch-debugsource-2.0.0-3.el8.ppc64le.rpm\nwebkit2gtk3-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.30.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.ppc64le.rpm\nwoff2-1.0.2-5.el8.ppc64le.rpm\nwoff2-debuginfo-1.0.2-5.el8.ppc64le.rpm\nwoff2-debugsource-1.0.2-5.el8.ppc64le.rpm\n\ns390x:\nOpenEXR-debuginfo-2.2.0-12.el8.s390x.rpm\nOpenEXR-debugsource-2.2.0-12.el8.s390x.rpm\nOpenEXR-libs-2.2.0-12.el8.s390x.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.s390x.rpm\naccountsservice-0.6.55-1.el8.s390x.rpm\naccountsservice-debuginfo-0.6.55-1.el8.s390x.rpm\naccountsservice-debugsource-0.6.55-1.el8.s390x.rpm\naccountsservice-libs-0.6.55-1.el8.s390x.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.s390x.rpm\natkmm-2.24.2-7.el8.s390x.rpm\natkmm-debuginfo-2.24.2-7.el8.s390x.rpm\natkmm-debugsource-2.24.2-7.el
8.s390x.rpm\ncairomm-1.12.0-8.el8.s390x.rpm\ncairomm-debuginfo-1.12.0-8.el8.s390x.rpm\ncairomm-debugsource-1.12.0-8.el8.s390x.rpm\nchrome-gnome-shell-10.1-7.el8.s390x.rpm\nenchant2-2.2.3-3.el8.s390x.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.s390x.rpm\nenchant2-debuginfo-2.2.3-3.el8.s390x.rpm\nenchant2-debugsource-2.2.3-3.el8.s390x.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.s390x.rpm\ngdm-3.28.3-39.el8.s390x.rpm\ngdm-debuginfo-3.28.3-39.el8.s390x.rpm\ngdm-debugsource-3.28.3-39.el8.s390x.rpm\ngeoclue2-2.5.5-2.el8.s390x.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.s390x.rpm\ngeoclue2-debugsource-2.5.5-2.el8.s390x.rpm\ngeoclue2-demos-2.5.5-2.el8.s390x.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.s390x.rpm\ngeoclue2-libs-2.5.5-2.el8.s390x.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.s390x.rpm\ngeocode-glib-3.26.0-3.el8.s390x.rpm\ngeocode-glib-debuginfo-3.26.0-3.el8.s390x.rpm\ngeocode-glib-debugsource-3.26.0-3.el8.s390x.rpm\ngeocode-glib-devel-3.26.0-3.el8.s390x.rpm\ngjs-1.56.2-5.el8.s390x.rpm\ngjs-debuginfo-1.56.2-5.el8.s390x.rpm\ngjs-debugsource-1.56.2-5.el8.s390x.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.s390x.rpm\nglibmm24-2.56.0-2.el8.s390x.rpm\nglibmm24-debuginfo-2.56.0-2.el8.s390x.rpm\nglibmm24-debugsource-2.56.0-2.el8.s390x.rpm\ngnome-control-center-3.28.2-27.el8.s390x.rpm\ngnome-control-center-debuginfo-3.28.2-27.el8.s390x.rpm\ngnome-control-center-debugsource-3.28.2-27.el8.s390x.rpm\ngnome-online-accounts-3.28.2-2.el8.s390x.rpm\ngnome-online-accounts-debuginfo-3.28.2-2.el8.s390x.rpm\ngnome-online-accounts-debugsource-3.28.2-2.el8.s390x.rpm\ngnome-online-accounts-devel-3.28.2-2.el8.s390x.rpm\ngnome-settings-daemon-3.32.0-14.el8.s390x.rpm\ngnome-settings-daemon-debuginfo-3.32.0-14.el8.s390x.rpm\ngnome-settings-daemon-debugsource-3.32.0-14.el8.s390x.rpm\ngnome-shell-3.32.2-30.el8.s390x.rpm\ngnome-shell-debuginfo-3.32.2-30.el8.s390x.rpm\ngnome-shell-debugsource-3.32.2-30.el8.s390x.rpm\ngnome-software-3.36.1-5.el8.s390x.rpm\ngnome-software-debuginfo-3.36.1-5.el8.s390x.rpm\ngnome-softwa
re-debugsource-3.36.1-5.el8.s390x.rpm\ngnome-terminal-3.28.3-3.el8.s390x.rpm\ngnome-terminal-debuginfo-3.28.3-3.el8.s390x.rpm\ngnome-terminal-debugsource-3.28.3-3.el8.s390x.rpm\ngnome-terminal-nautilus-3.28.3-3.el8.s390x.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-3.el8.s390x.rpm\ngtk2-2.24.32-5.el8.s390x.rpm\ngtk2-debuginfo-2.24.32-5.el8.s390x.rpm\ngtk2-debugsource-2.24.32-5.el8.s390x.rpm\ngtk2-devel-2.24.32-5.el8.s390x.rpm\ngtk2-devel-debuginfo-2.24.32-5.el8.s390x.rpm\ngtk2-devel-docs-2.24.32-5.el8.s390x.rpm\ngtk2-immodule-xim-2.24.32-5.el8.s390x.rpm\ngtk2-immodule-xim-debuginfo-2.24.32-5.el8.s390x.rpm\ngtk2-immodules-2.24.32-5.el8.s390x.rpm\ngtk2-immodules-debuginfo-2.24.32-5.el8.s390x.rpm\ngtkmm24-2.24.5-6.el8.s390x.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.s390x.rpm\ngtkmm24-debugsource-2.24.5-6.el8.s390x.rpm\ngtkmm30-3.22.2-3.el8.s390x.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.s390x.rpm\ngtkmm30-debugsource-3.22.2-3.el8.s390x.rpm\ngvfs-1.36.2-11.el8.s390x.rpm\ngvfs-afp-1.36.2-11.el8.s390x.rpm\ngvfs-afp-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-archive-1.36.2-11.el8.s390x.rpm\ngvfs-archive-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-client-1.36.2-11.el8.s390x.rpm\ngvfs-client-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-debugsource-1.36.2-11.el8.s390x.rpm\ngvfs-devel-1.36.2-11.el8.s390x.rpm\ngvfs-fuse-1.36.2-11.el8.s390x.rpm\ngvfs-fuse-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-goa-1.36.2-11.el8.s390x.rpm\ngvfs-goa-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-gphoto2-1.36.2-11.el8.s390x.rpm\ngvfs-gphoto2-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-mtp-1.36.2-11.el8.s390x.rpm\ngvfs-mtp-debuginfo-1.36.2-11.el8.s390x.rpm\ngvfs-smb-1.36.2-11.el8.s390x.rpm\ngvfs-smb-debuginfo-1.36.2-11.el8.s390x.rpm\nlibsigc++20-2.10.0-6.el8.s390x.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.s390x.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.s390x.rpm\nlibvisual-0.4.0-25.el8.s390x.rpm\nlibvisual-debuginfo-0.4.0-25.el8.s390x.rpm\nlibvisual-debugsource-0.4.0-25.el8.s390x.rpm\nmutter-3.
32.2-57.el8.s390x.rpm\nmutter-debuginfo-3.32.2-57.el8.s390x.rpm\nmutter-debugsource-3.32.2-57.el8.s390x.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.s390x.rpm\nnautilus-3.28.1-15.el8.s390x.rpm\nnautilus-debuginfo-3.28.1-15.el8.s390x.rpm\nnautilus-debugsource-3.28.1-15.el8.s390x.rpm\nnautilus-extensions-3.28.1-15.el8.s390x.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.s390x.rpm\npangomm-2.40.1-6.el8.s390x.rpm\npangomm-debuginfo-2.40.1-6.el8.s390x.rpm\npangomm-debugsource-2.40.1-6.el8.s390x.rpm\nsoundtouch-2.0.0-3.el8.s390x.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.s390x.rpm\nsoundtouch-debugsource-2.0.0-3.el8.s390x.rpm\nwebkit2gtk3-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-devel-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.30.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.s390x.rpm\nwoff2-1.0.2-5.el8.s390x.rpm\nwoff2-debuginfo-1.0.2-5.el8.s390x.rpm\nwoff2-debugsource-1.0.2-5.el8.s390x.rpm\n\nx86_64:\nOpenEXR-debuginfo-2.2.0-12.el8.i686.rpm\nOpenEXR-debuginfo-2.2.0-12.el8.x86_64.rpm\nOpenEXR-debugsource-2.2.0-12.el8.i686.rpm\nOpenEXR-debugsource-2.2.0-12.el8.x86_64.rpm\nOpenEXR-libs-2.2.0-12.el8.i686.rpm\nOpenEXR-libs-2.2.0-12.el8.x86_64.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.i686.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.x86_64.rpm\naccountsservice-0.6.55-1.el8.x86_64.rpm\naccountsservice-debuginfo-0.6.55-1.el8.i686.rpm\naccountsservice-debuginfo-0.6.55-1.el8.x86_64.rpm\naccountsservice-debugsource-0.6.55-1.el8.i686.rpm\naccountsservice-debugsource-0.6.55-1.el8.x86_64.rpm\naccountsservice-libs-0.6.55-1.el8.i686.rpm\naccountsservice-libs-0.6.55-1.el8.x86_64.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.i686.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.x86_64.rpm\natkmm-2.24.2-7.el8.i686.rpm\natkmm-2.24.2-7.el8.x86_64.r
pm\natkmm-debuginfo-2.24.2-7.el8.i686.rpm\natkmm-debuginfo-2.24.2-7.el8.x86_64.rpm\natkmm-debugsource-2.24.2-7.el8.i686.rpm\natkmm-debugsource-2.24.2-7.el8.x86_64.rpm\ncairomm-1.12.0-8.el8.i686.rpm\ncairomm-1.12.0-8.el8.x86_64.rpm\ncairomm-debuginfo-1.12.0-8.el8.i686.rpm\ncairomm-debuginfo-1.12.0-8.el8.x86_64.rpm\ncairomm-debugsource-1.12.0-8.el8.i686.rpm\ncairomm-debugsource-1.12.0-8.el8.x86_64.rpm\nchrome-gnome-shell-10.1-7.el8.x86_64.rpm\ndleyna-core-0.6.0-3.el8.i686.rpm\ndleyna-core-0.6.0-3.el8.x86_64.rpm\ndleyna-core-debuginfo-0.6.0-3.el8.i686.rpm\ndleyna-core-debuginfo-0.6.0-3.el8.x86_64.rpm\ndleyna-core-debugsource-0.6.0-3.el8.i686.rpm\ndleyna-core-debugsource-0.6.0-3.el8.x86_64.rpm\ndleyna-server-0.6.0-3.el8.x86_64.rpm\ndleyna-server-debuginfo-0.6.0-3.el8.x86_64.rpm\ndleyna-server-debugsource-0.6.0-3.el8.x86_64.rpm\nenchant2-2.2.3-3.el8.i686.rpm\nenchant2-2.2.3-3.el8.x86_64.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.i686.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.x86_64.rpm\nenchant2-debuginfo-2.2.3-3.el8.i686.rpm\nenchant2-debuginfo-2.2.3-3.el8.x86_64.rpm\nenchant2-debugsource-2.2.3-3.el8.i686.rpm\nenchant2-debugsource-2.2.3-3.el8.x86_64.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.i686.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.x86_64.rpm\ngdm-3.28.3-39.el8.i686.rpm\ngdm-3.28.3-39.el8.x86_64.rpm\ngdm-debuginfo-3.28.3-39.el8.i686.rpm\ngdm-debuginfo-3.28.3-39.el8.x86_64.rpm\ngdm-debugsource-3.28.3-39.el8.i686.rpm\ngdm-debugsource-3.28.3-39.el8.x86_64.rpm\ngeoclue2-2.5.5-2.el8.i686.rpm\ngeoclue2-2.5.5-2.el8.x86_64.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.i686.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.x86_64.rpm\ngeoclue2-debugsource-2.5.5-2.el8.i686.rpm\ngeoclue2-debugsource-2.5.5-2.el8.x86_64.rpm\ngeoclue2-demos-2.5.5-2.el8.x86_64.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.i686.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.x86_64.rpm\ngeoclue2-libs-2.5.5-2.el8.i686.rpm\ngeoclue2-libs-2.5.5-2.el8.x86_64.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.i686.rpm\ngeoclue2-libs-debuginfo
-2.5.5-2.el8.x86_64.rpm\ngeocode-glib-3.26.0-3.el8.i686.rpm\ngeocode-glib-3.26.0-3.el8.x86_64.rpm\ngeocode-glib-debuginfo-3.26.0-3.el8.i686.rpm\ngeocode-glib-debuginfo-3.26.0-3.el8.x86_64.rpm\ngeocode-glib-debugsource-3.26.0-3.el8.i686.rpm\ngeocode-glib-debugsource-3.26.0-3.el8.x86_64.rpm\ngeocode-glib-devel-3.26.0-3.el8.i686.rpm\ngeocode-glib-devel-3.26.0-3.el8.x86_64.rpm\ngjs-1.56.2-5.el8.i686.rpm\ngjs-1.56.2-5.el8.x86_64.rpm\ngjs-debuginfo-1.56.2-5.el8.i686.rpm\ngjs-debuginfo-1.56.2-5.el8.x86_64.rpm\ngjs-debugsource-1.56.2-5.el8.i686.rpm\ngjs-debugsource-1.56.2-5.el8.x86_64.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.i686.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.x86_64.rpm\nglibmm24-2.56.0-2.el8.i686.rpm\nglibmm24-2.56.0-2.el8.x86_64.rpm\nglibmm24-debuginfo-2.56.0-2.el8.i686.rpm\nglibmm24-debuginfo-2.56.0-2.el8.x86_64.rpm\nglibmm24-debugsource-2.56.0-2.el8.i686.rpm\nglibmm24-debugsource-2.56.0-2.el8.x86_64.rpm\ngnome-boxes-3.36.5-8.el8.x86_64.rpm\ngnome-boxes-debuginfo-3.36.5-8.el8.x86_64.rpm\ngnome-boxes-debugsource-3.36.5-8.el8.x86_64.rpm\ngnome-control-center-3.28.2-27.el8.x86_64.rpm\ngnome-control-center-debuginfo-3.28.2-27.el8.x86_64.rpm\ngnome-control-center-debugsource-3.28.2-27.el8.x86_64.rpm\ngnome-online-accounts-3.28.2-2.el8.i686.rpm\ngnome-online-accounts-3.28.2-2.el8.x86_64.rpm\ngnome-online-accounts-debuginfo-3.28.2-2.el8.i686.rpm\ngnome-online-accounts-debuginfo-3.28.2-2.el8.x86_64.rpm\ngnome-online-accounts-debugsource-3.28.2-2.el8.i686.rpm\ngnome-online-accounts-debugsource-3.28.2-2.el8.x86_64.rpm\ngnome-online-accounts-devel-3.28.2-2.el8.i686.rpm\ngnome-online-accounts-devel-3.28.2-2.el8.x86_64.rpm\ngnome-photos-3.28.1-4.el8.x86_64.rpm\ngnome-photos-debuginfo-3.28.1-4.el8.x86_64.rpm\ngnome-photos-debugsource-3.28.1-4.el8.x86_64.rpm\ngnome-photos-tests-3.28.1-4.el8.x86_64.rpm\ngnome-settings-daemon-3.32.0-14.el8.x86_64.rpm\ngnome-settings-daemon-debuginfo-3.32.0-14.el8.x86_64.rpm\ngnome-settings-daemon-debugsource-3.32.0-14.el8.x86_64.rpm\ngnome-shell-3.3
2.2-30.el8.x86_64.rpm\ngnome-shell-debuginfo-3.32.2-30.el8.x86_64.rpm\ngnome-shell-debugsource-3.32.2-30.el8.x86_64.rpm\ngnome-software-3.36.1-5.el8.x86_64.rpm\ngnome-software-debuginfo-3.36.1-5.el8.x86_64.rpm\ngnome-software-debugsource-3.36.1-5.el8.x86_64.rpm\ngnome-terminal-3.28.3-3.el8.x86_64.rpm\ngnome-terminal-debuginfo-3.28.3-3.el8.x86_64.rpm\ngnome-terminal-debugsource-3.28.3-3.el8.x86_64.rpm\ngnome-terminal-nautilus-3.28.3-3.el8.x86_64.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-3.el8.x86_64.rpm\ngtk2-2.24.32-5.el8.i686.rpm\ngtk2-2.24.32-5.el8.x86_64.rpm\ngtk2-debuginfo-2.24.32-5.el8.i686.rpm\ngtk2-debuginfo-2.24.32-5.el8.x86_64.rpm\ngtk2-debugsource-2.24.32-5.el8.i686.rpm\ngtk2-debugsource-2.24.32-5.el8.x86_64.rpm\ngtk2-devel-2.24.32-5.el8.i686.rpm\ngtk2-devel-2.24.32-5.el8.x86_64.rpm\ngtk2-devel-debuginfo-2.24.32-5.el8.i686.rpm\ngtk2-devel-debuginfo-2.24.32-5.el8.x86_64.rpm\ngtk2-devel-docs-2.24.32-5.el8.x86_64.rpm\ngtk2-immodule-xim-2.24.32-5.el8.i686.rpm\ngtk2-immodule-xim-2.24.32-5.el8.x86_64.rpm\ngtk2-immodule-xim-debuginfo-2.24.32-5.el8.i686.rpm\ngtk2-immodule-xim-debuginfo-2.24.32-5.el8.x86_64.rpm\ngtk2-immodules-2.24.32-5.el8.i686.rpm\ngtk2-immodules-2.24.32-5.el8.x86_64.rpm\ngtk2-immodules-debuginfo-2.24.32-5.el8.i686.rpm\ngtk2-immodules-debuginfo-2.24.32-5.el8.x86_64.rpm\ngtkmm24-2.24.5-6.el8.i686.rpm\ngtkmm24-2.24.5-6.el8.x86_64.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.i686.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.x86_64.rpm\ngtkmm24-debugsource-2.24.5-6.el8.i686.rpm\ngtkmm24-debugsource-2.24.5-6.el8.x86_64.rpm\ngtkmm30-3.22.2-3.el8.i686.rpm\ngtkmm30-3.22.2-3.el8.x86_64.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.i686.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.x86_64.rpm\ngtkmm30-debugsource-3.22.2-3.el8.i686.rpm\ngtkmm30-debugsource-3.22.2-3.el8.x86_64.rpm\ngvfs-1.36.2-11.el8.x86_64.rpm\ngvfs-afc-1.36.2-11.el8.x86_64.rpm\ngvfs-afc-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-afc-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-afp-1.36.2-11.el8.x86_64.rpm\ngvfs-afp-debuginfo-1.36.
2-11.el8.i686.rpm\ngvfs-afp-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-archive-1.36.2-11.el8.x86_64.rpm\ngvfs-archive-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-archive-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-client-1.36.2-11.el8.i686.rpm\ngvfs-client-1.36.2-11.el8.x86_64.rpm\ngvfs-client-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-client-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-debugsource-1.36.2-11.el8.i686.rpm\ngvfs-debugsource-1.36.2-11.el8.x86_64.rpm\ngvfs-devel-1.36.2-11.el8.i686.rpm\ngvfs-devel-1.36.2-11.el8.x86_64.rpm\ngvfs-fuse-1.36.2-11.el8.x86_64.rpm\ngvfs-fuse-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-fuse-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-goa-1.36.2-11.el8.x86_64.rpm\ngvfs-goa-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-goa-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-gphoto2-1.36.2-11.el8.x86_64.rpm\ngvfs-gphoto2-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-gphoto2-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-mtp-1.36.2-11.el8.x86_64.rpm\ngvfs-mtp-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-mtp-debuginfo-1.36.2-11.el8.x86_64.rpm\ngvfs-smb-1.36.2-11.el8.x86_64.rpm\ngvfs-smb-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-smb-debuginfo-1.36.2-11.el8.x86_64.rpm\nlibdazzle-3.28.5-2.el8.i686.rpm\nlibdazzle-3.28.5-2.el8.x86_64.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.i686.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.x86_64.rpm\nlibdazzle-debugsource-3.28.5-2.el8.i686.rpm\nlibdazzle-debugsource-3.28.5-2.el8.x86_64.rpm\nlibepubgen-0.1.0-3.el8.i686.rpm\nlibepubgen-0.1.0-3.el8.x86_64.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.i686.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.x86_64.rpm\nlibepubgen-debugsource-0.1.0-3.el8.i686.rpm\nlibepubgen-debugsource-0.1.0-3.el8.x86_64.rpm\nlibsigc++20-2.10.0-6.el8.i686.rpm\nlibsigc++20-2.10.0-6.el8.x86_64.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.i686.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.x86_64.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.i686.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.x86_64.rpm\nlibvisual-0
.4.0-25.el8.i686.rpm\nlibvisual-0.4.0-25.el8.x86_64.rpm\nlibvisual-debuginfo-0.4.0-25.el8.i686.rpm\nlibvisual-debuginfo-0.4.0-25.el8.x86_64.rpm\nlibvisual-debugsource-0.4.0-25.el8.i686.rpm\nlibvisual-debugsource-0.4.0-25.el8.x86_64.rpm\nmutter-3.32.2-57.el8.i686.rpm\nmutter-3.32.2-57.el8.x86_64.rpm\nmutter-debuginfo-3.32.2-57.el8.i686.rpm\nmutter-debuginfo-3.32.2-57.el8.x86_64.rpm\nmutter-debugsource-3.32.2-57.el8.i686.rpm\nmutter-debugsource-3.32.2-57.el8.x86_64.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.i686.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.x86_64.rpm\nnautilus-3.28.1-15.el8.x86_64.rpm\nnautilus-debuginfo-3.28.1-15.el8.i686.rpm\nnautilus-debuginfo-3.28.1-15.el8.x86_64.rpm\nnautilus-debugsource-3.28.1-15.el8.i686.rpm\nnautilus-debugsource-3.28.1-15.el8.x86_64.rpm\nnautilus-extensions-3.28.1-15.el8.i686.rpm\nnautilus-extensions-3.28.1-15.el8.x86_64.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.i686.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.x86_64.rpm\npangomm-2.40.1-6.el8.i686.rpm\npangomm-2.40.1-6.el8.x86_64.rpm\npangomm-debuginfo-2.40.1-6.el8.i686.rpm\npangomm-debuginfo-2.40.1-6.el8.x86_64.rpm\npangomm-debugsource-2.40.1-6.el8.i686.rpm\npangomm-debugsource-2.40.1-6.el8.x86_64.rpm\nsoundtouch-2.0.0-3.el8.i686.rpm\nsoundtouch-2.0.0-3.el8.x86_64.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.i686.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.x86_64.rpm\nsoundtouch-debugsource-2.0.0-3.el8.i686.rpm\nsoundtouch-debugsource-2.0.0-3.el8.x86_64.rpm\nwebkit2gtk3-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-2.30.4-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.30.4-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.30.4-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-devel-2.30.4-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.30.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.30.4-1.el8.x86_64.rpm\nwebk
it2gtk3-jsc-debuginfo-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.30.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.30.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.30.4-1.el8.x86_64.rpm\nwoff2-1.0.2-5.el8.i686.rpm\nwoff2-1.0.2-5.el8.x86_64.rpm\nwoff2-debuginfo-1.0.2-5.el8.i686.rpm\nwoff2-debuginfo-1.0.2-5.el8.x86_64.rpm\nwoff2-debugsource-1.0.2-5.el8.i686.rpm\nwoff2-debugsource-1.0.2-5.el8.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\ngamin-0.1.10-32.el8.src.rpm\nglib2-2.56.4-9.el8.src.rpm\n\naarch64:\ngamin-0.1.10-32.el8.aarch64.rpm\ngamin-debuginfo-0.1.10-32.el8.aarch64.rpm\ngamin-debugsource-0.1.10-32.el8.aarch64.rpm\nglib2-2.56.4-9.el8.aarch64.rpm\nglib2-debuginfo-2.56.4-9.el8.aarch64.rpm\nglib2-debugsource-2.56.4-9.el8.aarch64.rpm\nglib2-devel-2.56.4-9.el8.aarch64.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.aarch64.rpm\nglib2-fam-2.56.4-9.el8.aarch64.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.aarch64.rpm\nglib2-tests-2.56.4-9.el8.aarch64.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.aarch64.rpm\n\nppc64le:\ngamin-0.1.10-32.el8.ppc64le.rpm\ngamin-debuginfo-0.1.10-32.el8.ppc64le.rpm\ngamin-debugsource-0.1.10-32.el8.ppc64le.rpm\nglib2-2.56.4-9.el8.ppc64le.rpm\nglib2-debuginfo-2.56.4-9.el8.ppc64le.rpm\nglib2-debugsource-2.56.4-9.el8.ppc64le.rpm\nglib2-devel-2.56.4-9.el8.ppc64le.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.ppc64le.rpm\nglib2-fam-2.56.4-9.el8.ppc64le.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.ppc64le.rpm\nglib2-tests-2.56.4-9.el8.ppc64le.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.ppc64le.rpm\n\ns390x:\ngamin-0.1.10-32.el8.s390x.rpm\ngamin-debuginfo-0.1.10-32.el8.s390x.rpm\ngamin-debugsource-0.1.10-32.el8.s390x.rpm\nglib2-2.56.4-9.el8.s390x.rpm\nglib2-debuginfo-2.56.4-9.el8.s390x.rpm\nglib2-debugsource-2.56.4-9.el8.s390x.rpm\nglib2-devel-2.56.4-9.el8.s390x.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.s390x.rpm\nglib2-fam-2.56.4-9.el8.s390x.rpm\nglib2-fa
m-debuginfo-2.56.4-9.el8.s390x.rpm\nglib2-tests-2.56.4-9.el8.s390x.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.s390x.rpm\n\nx86_64:\ngamin-0.1.10-32.el8.i686.rpm\ngamin-0.1.10-32.el8.x86_64.rpm\ngamin-debuginfo-0.1.10-32.el8.i686.rpm\ngamin-debuginfo-0.1.10-32.el8.x86_64.rpm\ngamin-debugsource-0.1.10-32.el8.i686.rpm\ngamin-debugsource-0.1.10-32.el8.x86_64.rpm\nglib2-2.56.4-9.el8.i686.rpm\nglib2-2.56.4-9.el8.x86_64.rpm\nglib2-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-debuginfo-2.56.4-9.el8.x86_64.rpm\nglib2-debugsource-2.56.4-9.el8.i686.rpm\nglib2-debugsource-2.56.4-9.el8.x86_64.rpm\nglib2-devel-2.56.4-9.el8.i686.rpm\nglib2-devel-2.56.4-9.el8.x86_64.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.x86_64.rpm\nglib2-fam-2.56.4-9.el8.x86_64.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.x86_64.rpm\nglib2-tests-2.56.4-9.el8.x86_64.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.x86_64.rpm\n\nRed Hat CodeReady Linux Builder (v. 
8):\n\nSource:\ngtk-doc-1.28-3.el8.src.rpm\nlibdazzle-3.28.5-2.el8.src.rpm\nlibepubgen-0.1.0-3.el8.src.rpm\nlibsass-3.4.5-6.el8.src.rpm\nvala-0.40.19-2.el8.src.rpm\n\naarch64:\nOpenEXR-debuginfo-2.2.0-12.el8.aarch64.rpm\nOpenEXR-debugsource-2.2.0-12.el8.aarch64.rpm\nOpenEXR-devel-2.2.0-12.el8.aarch64.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.aarch64.rpm\naccountsservice-debuginfo-0.6.55-1.el8.aarch64.rpm\naccountsservice-debugsource-0.6.55-1.el8.aarch64.rpm\naccountsservice-devel-0.6.55-1.el8.aarch64.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.aarch64.rpm\natkmm-debuginfo-2.24.2-7.el8.aarch64.rpm\natkmm-debugsource-2.24.2-7.el8.aarch64.rpm\natkmm-devel-2.24.2-7.el8.aarch64.rpm\ncairomm-debuginfo-1.12.0-8.el8.aarch64.rpm\ncairomm-debugsource-1.12.0-8.el8.aarch64.rpm\ncairomm-devel-1.12.0-8.el8.aarch64.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.aarch64.rpm\nenchant2-debuginfo-2.2.3-3.el8.aarch64.rpm\nenchant2-debugsource-2.2.3-3.el8.aarch64.rpm\nenchant2-devel-2.2.3-3.el8.aarch64.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.aarch64.rpm\ngamin-debuginfo-0.1.10-32.el8.aarch64.rpm\ngamin-debugsource-0.1.10-32.el8.aarch64.rpm\ngamin-devel-0.1.10-32.el8.aarch64.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.aarch64.rpm\ngeoclue2-debugsource-2.5.5-2.el8.aarch64.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.aarch64.rpm\ngeoclue2-devel-2.5.5-2.el8.aarch64.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.aarch64.rpm\ngjs-debuginfo-1.56.2-5.el8.aarch64.rpm\ngjs-debugsource-1.56.2-5.el8.aarch64.rpm\ngjs-devel-1.56.2-5.el8.aarch64.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.aarch64.rpm\nglib2-debuginfo-2.56.4-9.el8.aarch64.rpm\nglib2-debugsource-2.56.4-9.el8.aarch64.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.aarch64.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.aarch64.rpm\nglib2-static-2.56.4-9.el8.aarch64.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.aarch64.rpm\nglibmm24-debuginfo-2.56.0-2.el8.aarch64.rpm\nglibmm24-debugsource-2.56.0-2.el8.aarch64.rpm\nglibmm24-devel-2.56.0-2.el8.aarch64.rpm\ngtk-doc-1.28-3.el8.aarch64.
rpm\ngtkmm24-debuginfo-2.24.5-6.el8.aarch64.rpm\ngtkmm24-debugsource-2.24.5-6.el8.aarch64.rpm\ngtkmm24-devel-2.24.5-6.el8.aarch64.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.aarch64.rpm\ngtkmm30-debugsource-3.22.2-3.el8.aarch64.rpm\ngtkmm30-devel-3.22.2-3.el8.aarch64.rpm\nlibdazzle-3.28.5-2.el8.aarch64.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.aarch64.rpm\nlibdazzle-debugsource-3.28.5-2.el8.aarch64.rpm\nlibdazzle-devel-3.28.5-2.el8.aarch64.rpm\nlibepubgen-0.1.0-3.el8.aarch64.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.aarch64.rpm\nlibepubgen-debugsource-0.1.0-3.el8.aarch64.rpm\nlibepubgen-devel-0.1.0-3.el8.aarch64.rpm\nlibsass-3.4.5-6.el8.aarch64.rpm\nlibsass-debuginfo-3.4.5-6.el8.aarch64.rpm\nlibsass-debugsource-3.4.5-6.el8.aarch64.rpm\nlibsass-devel-3.4.5-6.el8.aarch64.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.aarch64.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.aarch64.rpm\nlibsigc++20-devel-2.10.0-6.el8.aarch64.rpm\nlibvisual-debuginfo-0.4.0-25.el8.aarch64.rpm\nlibvisual-debugsource-0.4.0-25.el8.aarch64.rpm\nlibvisual-devel-0.4.0-25.el8.aarch64.rpm\nmutter-debuginfo-3.32.2-57.el8.aarch64.rpm\nmutter-debugsource-3.32.2-57.el8.aarch64.rpm\nmutter-devel-3.32.2-57.el8.aarch64.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.aarch64.rpm\nnautilus-debuginfo-3.28.1-15.el8.aarch64.rpm\nnautilus-debugsource-3.28.1-15.el8.aarch64.rpm\nnautilus-devel-3.28.1-15.el8.aarch64.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.aarch64.rpm\npangomm-debuginfo-2.40.1-6.el8.aarch64.rpm\npangomm-debugsource-2.40.1-6.el8.aarch64.rpm\npangomm-devel-2.40.1-6.el8.aarch64.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.aarch64.rpm\nsoundtouch-debugsource-2.0.0-3.el8.aarch64.rpm\nsoundtouch-devel-2.0.0-3.el8.aarch64.rpm\nvala-0.40.19-2.el8.aarch64.rpm\nvala-debuginfo-0.40.19-2.el8.aarch64.rpm\nvala-debugsource-0.40.19-2.el8.aarch64.rpm\nvala-devel-0.40.19-2.el8.aarch64.rpm\nvaladoc-debuginfo-0.40.19-2.el8.aarch64.rpm\nwoff2-debuginfo-1.0.2-5.el8.aarch64.rpm\nwoff2-debugsource-1.0.2-5.el8.aarch64.rpm\nwoff2-devel-1.0.2-5.el8.aarch
64.rpm\n\nnoarch:\natkmm-doc-2.24.2-7.el8.noarch.rpm\ncairomm-doc-1.12.0-8.el8.noarch.rpm\nglib2-doc-2.56.4-9.el8.noarch.rpm\nglibmm24-doc-2.56.0-2.el8.noarch.rpm\ngtkmm24-docs-2.24.5-6.el8.noarch.rpm\ngtkmm30-doc-3.22.2-3.el8.noarch.rpm\nlibsigc++20-doc-2.10.0-6.el8.noarch.rpm\npangomm-doc-2.40.1-6.el8.noarch.rpm\n\nppc64le:\nOpenEXR-debuginfo-2.2.0-12.el8.ppc64le.rpm\nOpenEXR-debugsource-2.2.0-12.el8.ppc64le.rpm\nOpenEXR-devel-2.2.0-12.el8.ppc64le.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.ppc64le.rpm\naccountsservice-debuginfo-0.6.55-1.el8.ppc64le.rpm\naccountsservice-debugsource-0.6.55-1.el8.ppc64le.rpm\naccountsservice-devel-0.6.55-1.el8.ppc64le.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.ppc64le.rpm\natkmm-debuginfo-2.24.2-7.el8.ppc64le.rpm\natkmm-debugsource-2.24.2-7.el8.ppc64le.rpm\natkmm-devel-2.24.2-7.el8.ppc64le.rpm\ncairomm-debuginfo-1.12.0-8.el8.ppc64le.rpm\ncairomm-debugsource-1.12.0-8.el8.ppc64le.rpm\ncairomm-devel-1.12.0-8.el8.ppc64le.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.ppc64le.rpm\nenchant2-debuginfo-2.2.3-3.el8.ppc64le.rpm\nenchant2-debugsource-2.2.3-3.el8.ppc64le.rpm\nenchant2-devel-2.2.3-3.el8.ppc64le.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.ppc64le.rpm\ngamin-debuginfo-0.1.10-32.el8.ppc64le.rpm\ngamin-debugsource-0.1.10-32.el8.ppc64le.rpm\ngamin-devel-0.1.10-32.el8.ppc64le.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-debugsource-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-devel-2.5.5-2.el8.ppc64le.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.ppc64le.rpm\ngjs-debuginfo-1.56.2-5.el8.ppc64le.rpm\ngjs-debugsource-1.56.2-5.el8.ppc64le.rpm\ngjs-devel-1.56.2-5.el8.ppc64le.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.ppc64le.rpm\nglib2-debuginfo-2.56.4-9.el8.ppc64le.rpm\nglib2-debugsource-2.56.4-9.el8.ppc64le.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.ppc64le.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.ppc64le.rpm\nglib2-static-2.56.4-9.el8.ppc64le.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.ppc64le.rpm\nglibmm24
-debuginfo-2.56.0-2.el8.ppc64le.rpm\nglibmm24-debugsource-2.56.0-2.el8.ppc64le.rpm\nglibmm24-devel-2.56.0-2.el8.ppc64le.rpm\ngtk-doc-1.28-3.el8.ppc64le.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.ppc64le.rpm\ngtkmm24-debugsource-2.24.5-6.el8.ppc64le.rpm\ngtkmm24-devel-2.24.5-6.el8.ppc64le.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.ppc64le.rpm\ngtkmm30-debugsource-3.22.2-3.el8.ppc64le.rpm\ngtkmm30-devel-3.22.2-3.el8.ppc64le.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.ppc64le.rpm\nlibdazzle-debugsource-3.28.5-2.el8.ppc64le.rpm\nlibdazzle-devel-3.28.5-2.el8.ppc64le.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.ppc64le.rpm\nlibepubgen-debugsource-0.1.0-3.el8.ppc64le.rpm\nlibepubgen-devel-0.1.0-3.el8.ppc64le.rpm\nlibsass-3.4.5-6.el8.ppc64le.rpm\nlibsass-debuginfo-3.4.5-6.el8.ppc64le.rpm\nlibsass-debugsource-3.4.5-6.el8.ppc64le.rpm\nlibsass-devel-3.4.5-6.el8.ppc64le.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.ppc64le.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.ppc64le.rpm\nlibsigc++20-devel-2.10.0-6.el8.ppc64le.rpm\nlibvisual-debuginfo-0.4.0-25.el8.ppc64le.rpm\nlibvisual-debugsource-0.4.0-25.el8.ppc64le.rpm\nlibvisual-devel-0.4.0-25.el8.ppc64le.rpm\nmutter-debuginfo-3.32.2-57.el8.ppc64le.rpm\nmutter-debugsource-3.32.2-57.el8.ppc64le.rpm\nmutter-devel-3.32.2-57.el8.ppc64le.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.ppc64le.rpm\nnautilus-debuginfo-3.28.1-15.el8.ppc64le.rpm\nnautilus-debugsource-3.28.1-15.el8.ppc64le.rpm\nnautilus-devel-3.28.1-15.el8.ppc64le.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.ppc64le.rpm\npangomm-debuginfo-2.40.1-6.el8.ppc64le.rpm\npangomm-debugsource-2.40.1-6.el8.ppc64le.rpm\npangomm-devel-2.40.1-6.el8.ppc64le.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.ppc64le.rpm\nsoundtouch-debugsource-2.0.0-3.el8.ppc64le.rpm\nsoundtouch-devel-2.0.0-3.el8.ppc64le.rpm\nvala-0.40.19-2.el8.ppc64le.rpm\nvala-debuginfo-0.40.19-2.el8.ppc64le.rpm\nvala-debugsource-0.40.19-2.el8.ppc64le.rpm\nvala-devel-0.40.19-2.el8.ppc64le.rpm\nvaladoc-debuginfo-0.40.19-2.el8.ppc64le.rpm\nwoff2-debuginfo-1.0.2-5.el8.ppc64
le.rpm\nwoff2-debugsource-1.0.2-5.el8.ppc64le.rpm\nwoff2-devel-1.0.2-5.el8.ppc64le.rpm\n\ns390x:\nOpenEXR-debuginfo-2.2.0-12.el8.s390x.rpm\nOpenEXR-debugsource-2.2.0-12.el8.s390x.rpm\nOpenEXR-devel-2.2.0-12.el8.s390x.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.s390x.rpm\naccountsservice-debuginfo-0.6.55-1.el8.s390x.rpm\naccountsservice-debugsource-0.6.55-1.el8.s390x.rpm\naccountsservice-devel-0.6.55-1.el8.s390x.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.s390x.rpm\natkmm-debuginfo-2.24.2-7.el8.s390x.rpm\natkmm-debugsource-2.24.2-7.el8.s390x.rpm\natkmm-devel-2.24.2-7.el8.s390x.rpm\ncairomm-debuginfo-1.12.0-8.el8.s390x.rpm\ncairomm-debugsource-1.12.0-8.el8.s390x.rpm\ncairomm-devel-1.12.0-8.el8.s390x.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.s390x.rpm\nenchant2-debuginfo-2.2.3-3.el8.s390x.rpm\nenchant2-debugsource-2.2.3-3.el8.s390x.rpm\nenchant2-devel-2.2.3-3.el8.s390x.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.s390x.rpm\ngamin-debuginfo-0.1.10-32.el8.s390x.rpm\ngamin-debugsource-0.1.10-32.el8.s390x.rpm\ngamin-devel-0.1.10-32.el8.s390x.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.s390x.rpm\ngeoclue2-debugsource-2.5.5-2.el8.s390x.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.s390x.rpm\ngeoclue2-devel-2.5.5-2.el8.s390x.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.s390x.rpm\ngjs-debuginfo-1.56.2-5.el8.s390x.rpm\ngjs-debugsource-1.56.2-5.el8.s390x.rpm\ngjs-devel-1.56.2-5.el8.s390x.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.s390x.rpm\nglib2-debuginfo-2.56.4-9.el8.s390x.rpm\nglib2-debugsource-2.56.4-9.el8.s390x.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.s390x.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.s390x.rpm\nglib2-static-2.56.4-9.el8.s390x.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.s390x.rpm\nglibmm24-debuginfo-2.56.0-2.el8.s390x.rpm\nglibmm24-debugsource-2.56.0-2.el8.s390x.rpm\nglibmm24-devel-2.56.0-2.el8.s390x.rpm\ngtk-doc-1.28-3.el8.s390x.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.s390x.rpm\ngtkmm24-debugsource-2.24.5-6.el8.s390x.rpm\ngtkmm24-devel-2.24.5-6.el8.s390x.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.s
390x.rpm\ngtkmm30-debugsource-3.22.2-3.el8.s390x.rpm\ngtkmm30-devel-3.22.2-3.el8.s390x.rpm\nlibdazzle-3.28.5-2.el8.s390x.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.s390x.rpm\nlibdazzle-debugsource-3.28.5-2.el8.s390x.rpm\nlibdazzle-devel-3.28.5-2.el8.s390x.rpm\nlibepubgen-0.1.0-3.el8.s390x.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.s390x.rpm\nlibepubgen-debugsource-0.1.0-3.el8.s390x.rpm\nlibepubgen-devel-0.1.0-3.el8.s390x.rpm\nlibsass-3.4.5-6.el8.s390x.rpm\nlibsass-debuginfo-3.4.5-6.el8.s390x.rpm\nlibsass-debugsource-3.4.5-6.el8.s390x.rpm\nlibsass-devel-3.4.5-6.el8.s390x.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.s390x.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.s390x.rpm\nlibsigc++20-devel-2.10.0-6.el8.s390x.rpm\nlibvisual-debuginfo-0.4.0-25.el8.s390x.rpm\nlibvisual-debugsource-0.4.0-25.el8.s390x.rpm\nlibvisual-devel-0.4.0-25.el8.s390x.rpm\nmutter-debuginfo-3.32.2-57.el8.s390x.rpm\nmutter-debugsource-3.32.2-57.el8.s390x.rpm\nmutter-devel-3.32.2-57.el8.s390x.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.s390x.rpm\nnautilus-debuginfo-3.28.1-15.el8.s390x.rpm\nnautilus-debugsource-3.28.1-15.el8.s390x.rpm\nnautilus-devel-3.28.1-15.el8.s390x.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.s390x.rpm\npangomm-debuginfo-2.40.1-6.el8.s390x.rpm\npangomm-debugsource-2.40.1-6.el8.s390x.rpm\npangomm-devel-2.40.1-6.el8.s390x.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.s390x.rpm\nsoundtouch-debugsource-2.0.0-3.el8.s390x.rpm\nsoundtouch-devel-2.0.0-3.el8.s390x.rpm\nvala-0.40.19-2.el8.s390x.rpm\nvala-debuginfo-0.40.19-2.el8.s390x.rpm\nvala-debugsource-0.40.19-2.el8.s390x.rpm\nvala-devel-0.40.19-2.el8.s390x.rpm\nvaladoc-debuginfo-0.40.19-2.el8.s390x.rpm\nwoff2-debuginfo-1.0.2-5.el8.s390x.rpm\nwoff2-debugsource-1.0.2-5.el8.s390x.rpm\nwoff2-devel-1.0.2-5.el8.s390x.rpm\n\nx86_64:\nOpenEXR-debuginfo-2.2.0-12.el8.i686.rpm\nOpenEXR-debuginfo-2.2.0-12.el8.x86_64.rpm\nOpenEXR-debugsource-2.2.0-12.el8.i686.rpm\nOpenEXR-debugsource-2.2.0-12.el8.x86_64.rpm\nOpenEXR-devel-2.2.0-12.el8.i686.rpm\nOpenEXR-devel-2.2.0-12.el
8.x86_64.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.i686.rpm\nOpenEXR-libs-debuginfo-2.2.0-12.el8.x86_64.rpm\naccountsservice-debuginfo-0.6.55-1.el8.i686.rpm\naccountsservice-debuginfo-0.6.55-1.el8.x86_64.rpm\naccountsservice-debugsource-0.6.55-1.el8.i686.rpm\naccountsservice-debugsource-0.6.55-1.el8.x86_64.rpm\naccountsservice-devel-0.6.55-1.el8.i686.rpm\naccountsservice-devel-0.6.55-1.el8.x86_64.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.i686.rpm\naccountsservice-libs-debuginfo-0.6.55-1.el8.x86_64.rpm\natkmm-debuginfo-2.24.2-7.el8.i686.rpm\natkmm-debuginfo-2.24.2-7.el8.x86_64.rpm\natkmm-debugsource-2.24.2-7.el8.i686.rpm\natkmm-debugsource-2.24.2-7.el8.x86_64.rpm\natkmm-devel-2.24.2-7.el8.i686.rpm\natkmm-devel-2.24.2-7.el8.x86_64.rpm\ncairomm-debuginfo-1.12.0-8.el8.i686.rpm\ncairomm-debuginfo-1.12.0-8.el8.x86_64.rpm\ncairomm-debugsource-1.12.0-8.el8.i686.rpm\ncairomm-debugsource-1.12.0-8.el8.x86_64.rpm\ncairomm-devel-1.12.0-8.el8.i686.rpm\ncairomm-devel-1.12.0-8.el8.x86_64.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.i686.rpm\nenchant2-aspell-debuginfo-2.2.3-3.el8.x86_64.rpm\nenchant2-debuginfo-2.2.3-3.el8.i686.rpm\nenchant2-debuginfo-2.2.3-3.el8.x86_64.rpm\nenchant2-debugsource-2.2.3-3.el8.i686.rpm\nenchant2-debugsource-2.2.3-3.el8.x86_64.rpm\nenchant2-devel-2.2.3-3.el8.i686.rpm\nenchant2-devel-2.2.3-3.el8.x86_64.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.i686.rpm\nenchant2-voikko-debuginfo-2.2.3-3.el8.x86_64.rpm\ngamin-debuginfo-0.1.10-32.el8.i686.rpm\ngamin-debuginfo-0.1.10-32.el8.x86_64.rpm\ngamin-debugsource-0.1.10-32.el8.i686.rpm\ngamin-debugsource-0.1.10-32.el8.x86_64.rpm\ngamin-devel-0.1.10-32.el8.i686.rpm\ngamin-devel-0.1.10-32.el8.x86_64.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.i686.rpm\ngeoclue2-debuginfo-2.5.5-2.el8.x86_64.rpm\ngeoclue2-debugsource-2.5.5-2.el8.i686.rpm\ngeoclue2-debugsource-2.5.5-2.el8.x86_64.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.i686.rpm\ngeoclue2-demos-debuginfo-2.5.5-2.el8.x86_64.rpm\ngeoclue2-devel-2.5.5-2.el8.i686.rpm\ngeoclue2-de
vel-2.5.5-2.el8.x86_64.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.i686.rpm\ngeoclue2-libs-debuginfo-2.5.5-2.el8.x86_64.rpm\ngjs-debuginfo-1.56.2-5.el8.i686.rpm\ngjs-debuginfo-1.56.2-5.el8.x86_64.rpm\ngjs-debugsource-1.56.2-5.el8.i686.rpm\ngjs-debugsource-1.56.2-5.el8.x86_64.rpm\ngjs-devel-1.56.2-5.el8.i686.rpm\ngjs-devel-1.56.2-5.el8.x86_64.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.i686.rpm\ngjs-tests-debuginfo-1.56.2-5.el8.x86_64.rpm\nglib2-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-debuginfo-2.56.4-9.el8.x86_64.rpm\nglib2-debugsource-2.56.4-9.el8.i686.rpm\nglib2-debugsource-2.56.4-9.el8.x86_64.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-devel-debuginfo-2.56.4-9.el8.x86_64.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-fam-debuginfo-2.56.4-9.el8.x86_64.rpm\nglib2-static-2.56.4-9.el8.i686.rpm\nglib2-static-2.56.4-9.el8.x86_64.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.i686.rpm\nglib2-tests-debuginfo-2.56.4-9.el8.x86_64.rpm\nglibmm24-debuginfo-2.56.0-2.el8.i686.rpm\nglibmm24-debuginfo-2.56.0-2.el8.x86_64.rpm\nglibmm24-debugsource-2.56.0-2.el8.i686.rpm\nglibmm24-debugsource-2.56.0-2.el8.x86_64.rpm\nglibmm24-devel-2.56.0-2.el8.i686.rpm\nglibmm24-devel-2.56.0-2.el8.x86_64.rpm\ngtk-doc-1.28-3.el8.x86_64.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.i686.rpm\ngtkmm24-debuginfo-2.24.5-6.el8.x86_64.rpm\ngtkmm24-debugsource-2.24.5-6.el8.i686.rpm\ngtkmm24-debugsource-2.24.5-6.el8.x86_64.rpm\ngtkmm24-devel-2.24.5-6.el8.i686.rpm\ngtkmm24-devel-2.24.5-6.el8.x86_64.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.i686.rpm\ngtkmm30-debuginfo-3.22.2-3.el8.x86_64.rpm\ngtkmm30-debugsource-3.22.2-3.el8.i686.rpm\ngtkmm30-debugsource-3.22.2-3.el8.x86_64.rpm\ngtkmm30-devel-3.22.2-3.el8.i686.rpm\ngtkmm30-devel-3.22.2-3.el8.x86_64.rpm\ngvfs-1.36.2-11.el8.i686.rpm\ngvfs-afc-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-afp-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-archive-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-client-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-debugsource-1.36.2-11.e
l8.i686.rpm\ngvfs-fuse-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-goa-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-gphoto2-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-mtp-debuginfo-1.36.2-11.el8.i686.rpm\ngvfs-smb-debuginfo-1.36.2-11.el8.i686.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.i686.rpm\nlibdazzle-debuginfo-3.28.5-2.el8.x86_64.rpm\nlibdazzle-debugsource-3.28.5-2.el8.i686.rpm\nlibdazzle-debugsource-3.28.5-2.el8.x86_64.rpm\nlibdazzle-devel-3.28.5-2.el8.i686.rpm\nlibdazzle-devel-3.28.5-2.el8.x86_64.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.i686.rpm\nlibepubgen-debuginfo-0.1.0-3.el8.x86_64.rpm\nlibepubgen-debugsource-0.1.0-3.el8.i686.rpm\nlibepubgen-debugsource-0.1.0-3.el8.x86_64.rpm\nlibepubgen-devel-0.1.0-3.el8.i686.rpm\nlibepubgen-devel-0.1.0-3.el8.x86_64.rpm\nlibsass-3.4.5-6.el8.i686.rpm\nlibsass-3.4.5-6.el8.x86_64.rpm\nlibsass-debuginfo-3.4.5-6.el8.i686.rpm\nlibsass-debuginfo-3.4.5-6.el8.x86_64.rpm\nlibsass-debugsource-3.4.5-6.el8.i686.rpm\nlibsass-debugsource-3.4.5-6.el8.x86_64.rpm\nlibsass-devel-3.4.5-6.el8.i686.rpm\nlibsass-devel-3.4.5-6.el8.x86_64.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.i686.rpm\nlibsigc++20-debuginfo-2.10.0-6.el8.x86_64.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.i686.rpm\nlibsigc++20-debugsource-2.10.0-6.el8.x86_64.rpm\nlibsigc++20-devel-2.10.0-6.el8.i686.rpm\nlibsigc++20-devel-2.10.0-6.el8.x86_64.rpm\nlibvisual-debuginfo-0.4.0-25.el8.i686.rpm\nlibvisual-debuginfo-0.4.0-25.el8.x86_64.rpm\nlibvisual-debugsource-0.4.0-25.el8.i686.rpm\nlibvisual-debugsource-0.4.0-25.el8.x86_64.rpm\nlibvisual-devel-0.4.0-25.el8.i686.rpm\nlibvisual-devel-0.4.0-25.el8.x86_64.rpm\nmutter-debuginfo-3.32.2-57.el8.i686.rpm\nmutter-debuginfo-3.32.2-57.el8.x86_64.rpm\nmutter-debugsource-3.32.2-57.el8.i686.rpm\nmutter-debugsource-3.32.2-57.el8.x86_64.rpm\nmutter-devel-3.32.2-57.el8.i686.rpm\nmutter-devel-3.32.2-57.el8.x86_64.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.i686.rpm\nmutter-tests-debuginfo-3.32.2-57.el8.x86_64.rpm\nnautilus-3.28.1-15.el8.i686.rpm\nnautilus-debuginfo-3.28.1-15.el8.i
686.rpm\nnautilus-debuginfo-3.28.1-15.el8.x86_64.rpm\nnautilus-debugsource-3.28.1-15.el8.i686.rpm\nnautilus-debugsource-3.28.1-15.el8.x86_64.rpm\nnautilus-devel-3.28.1-15.el8.i686.rpm\nnautilus-devel-3.28.1-15.el8.x86_64.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.i686.rpm\nnautilus-extensions-debuginfo-3.28.1-15.el8.x86_64.rpm\npangomm-debuginfo-2.40.1-6.el8.i686.rpm\npangomm-debuginfo-2.40.1-6.el8.x86_64.rpm\npangomm-debugsource-2.40.1-6.el8.i686.rpm\npangomm-debugsource-2.40.1-6.el8.x86_64.rpm\npangomm-devel-2.40.1-6.el8.i686.rpm\npangomm-devel-2.40.1-6.el8.x86_64.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.i686.rpm\nsoundtouch-debuginfo-2.0.0-3.el8.x86_64.rpm\nsoundtouch-debugsource-2.0.0-3.el8.i686.rpm\nsoundtouch-debugsource-2.0.0-3.el8.x86_64.rpm\nsoundtouch-devel-2.0.0-3.el8.i686.rpm\nsoundtouch-devel-2.0.0-3.el8.x86_64.rpm\nvala-0.40.19-2.el8.i686.rpm\nvala-0.40.19-2.el8.x86_64.rpm\nvala-debuginfo-0.40.19-2.el8.i686.rpm\nvala-debuginfo-0.40.19-2.el8.x86_64.rpm\nvala-debugsource-0.40.19-2.el8.i686.rpm\nvala-debugsource-0.40.19-2.el8.x86_64.rpm\nvala-devel-0.40.19-2.el8.i686.rpm\nvala-devel-0.40.19-2.el8.x86_64.rpm\nvaladoc-debuginfo-0.40.19-2.el8.i686.rpm\nvaladoc-debuginfo-0.40.19-2.el8.x86_64.rpm\nwoff2-debuginfo-1.0.2-5.el8.i686.rpm\nwoff2-debuginfo-1.0.2-5.el8.x86_64.rpm\nwoff2-debugsource-1.0.2-5.el8.i686.rpm\nwoff2-debugsource-1.0.2-5.el8.x86_64.rpm\nwoff2-devel-1.0.2-5.el8.i686.rpm\nwoff2-devel-1.0.2-5.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-13012\nhttps://access.redhat.com/security/cve/CVE-2020-9948\nhttps://access.redhat.com/security/cve/CVE-2020-9951\nhttps://access.redhat.com/security/cve/CVE-2020-9983\nhttps://access.redhat.com/security/cve/CVE-2020-13543\nhttps://access.redhat.com/security/cve/CVE-2020-13584\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.4_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2020-11-13-3 Additional information for\nAPPLE-SA-2020-09-16-1 iOS 14.0 and iPadOS 14.0\n\niOS 14.0 and iPadOS 14.0 addresses the following issues. Information\nabout the security content is also available at\nhttps://support.apple.com/HT211850. \n\nAppleAVD\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An application may be able to cause unexpected system\ntermination or write kernel memory\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2020-9958: Mohamed Ghannam (@_simo36)\n\nAssets\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An attacker may be able to misuse a trust relationship to\ndownload malicious content\nDescription: A trust issue was addressed by removing a legacy API. 
\nCVE-2020-9979: CodeColorist of LightYear Security Lab of AntGroup\nEntry updated November 12, 2020\n\nAudio\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A malicious application may be able to read restricted memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2020-9943: JunDong Xie of Ant Group Light-Year Security Lab\nEntry added November 12, 2020\n\nAudio\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An application may be able to read restricted memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2020-9944: JunDong Xie of Ant Group Light-Year Security Lab\nEntry added November 12, 2020\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Playing a malicious audio file may lead to arbitrary code\nexecution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nCVE-2020-9954: Francis working with Trend Micro Zero Day Initiative,\nJunDong Xie of Ant Group Light-Year Security Lab\nEntry added November 12, 2020\n\nCoreCapture\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2020-9949: Proteas\nEntry added November 12, 2020\n\nDisk Images\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. 
\nCVE-2020-9965: Proteas\nCVE-2020-9966: Proteas\nEntry added November 12, 2020\n\nIcons\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A malicious application may be able to identify what other\napplications a user has installed\nDescription: The issue was addressed with improved handling of icon\ncaches. \nCVE-2020-9773: Chilik Tamir of Zimperium zLabs\n\nIDE Device Support\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An attacker in a privileged network position may be able to\nexecute arbitrary code on a paired device during a debug session over\nthe network\nDescription: This issue was addressed by encrypting communications\nover the network to devices running iOS 14, iPadOS 14, tvOS 14, and\nwatchOS 7. \nCVE-2020-9992: Dany Lisiansky (@DanyL931), Nikias Bassen of Zimperium\nzLabs\nEntry updated September 17, 2020\n\nImageIO\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2020-9961: Xingwei Lin of Ant Security Light-Year Lab\nEntry added November 12, 2020\n\nImageIO\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Opening a maliciously crafted PDF file may lead to an\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. 
\nCVE-2020-9876: Mickey Jin of Trend Micro\nEntry added November 12, 2020\n\nIOSurfaceAccelerator\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A local user may be able to read kernel memory\nDescription: A memory initialization issue was addressed with\nimproved memory handling. \nCVE-2020-9964: Mohamed Ghannam (@_simo36), Tommy Muir (@Muirey03)\n\nKernel\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An attacker in a privileged network position may be able to\ninject into active connections within a VPN tunnel\nDescription: A routing issue was addressed with improved\nrestrictions. \nCVE-2019-14899: William J. Tolley, Beau Kujath, and Jedidiah R. \nCrandall\nEntry added November 12, 2020\n\nKeyboard\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2020-9976: Rias A. Sherzad of JAIDE GmbH in Hamburg, Germany\n\nlibxml2\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2020-9981: found by OSS-Fuzz\nEntry added November 12, 2020\n\nMail\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A remote attacker may be able to unexpectedly alter\napplication state\nDescription: This issue was addressed with improved checks. 
\nCVE-2020-9941: Fabian Ising of FH M\u00fcnster University of Applied\nSciences and Damian Poddebniak of FH M\u00fcnster University of Applied\nSciences\nEntry added November 12, 2020\n\nMessages\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A local user may be able to discover a user\u2019s deleted\nmessages\nDescription: The issue was addressed with improved deletion. \nCVE-2020-9988: William Breuer of the Netherlands\nCVE-2020-9989: von Brunn Media\nEntry added November 12, 2020\n\nModel I/O\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2020-13520: Aleksandar Nikolic of Cisco Talos\nEntry added November 12, 2020\n\nModel I/O\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nCVE-2020-6147: Aleksandar Nikolic of Cisco Talos\nCVE-2020-9972: Aleksandar Nikolic of Cisco Talos\nEntry added November 12, 2020\n\nModel I/O\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2020-9973: Aleksandar Nikolic of Cisco Talos\n\nNetworkExtension\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A malicious application may be able to elevate privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2020-9996: Zhiwei Yuan of Trend Micro iCore Team, Junzhi Lu and\nMickey Jin of Trend Micro\nEntry added November 12, 2020\n\nPhone\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: The screen lock may not engage after the specified time\nperiod\nDescription: This issue was addressed with improved checks. \nCVE-2020-9946: Daniel Larsson of iolight AB\n\nQuick Look\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A malicious app may be able to determine the existence of\nfiles on the computer\nDescription: The issue was addressed with improved handling of icon\ncaches. \nCVE-2020-9963: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added November 12, 2020\n\nSafari\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A malicious application may be able to determine a user\u0027s\nopen tabs in Safari\nDescription: A validation issue existed in the entitlement\nverification. \nCVE-2020-9977: Josh Parnham (@joshparnham)\nEntry added November 12, 2020\n\nSafari\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Visiting a malicious website may lead to address bar spoofing\nDescription: The issue was addressed with improved UI handling. 
\nCVE-2020-9993: Masato Sugiyama (@smasato) of University of Tsukuba,\nPiotr Duszynski\nEntry added November 12, 2020\n\nSandbox\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A local user may be able to view sensitive user information\nDescription: An access issue was addressed with additional sandbox\nrestrictions. \nCVE-2020-9969: Wojciech Regu\u0142a of SecuRing (wojciechregula.blog)\nEntry added November 12, 2020\n\nSandbox\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A malicious application may be able to access restricted\nfiles\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2020-9968: Adam Chester (@_xpn_) of TrustedSec\nEntry updated September 17, 2020\n\nSiri\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A person with physical access to an iOS device may be able to\nview notification contents from the lockscreen\nDescription: A lock screen issue allowed access to messages on a\nlocked device. \nCVE-2020-9959: an anonymous researcher, an anonymous researcher, an\nanonymous researcher, an anonymous researcher, an anonymous\nresearcher, Andrew Goldberg The University of Texas at Austin,\nMcCombs School of Business, Meli\u0307h Kerem G\u00fcne\u015f of Li\u0307v College, Sinan\nGulguler\n\nSQLite\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. 
\nCVE-2020-13434\nCVE-2020-13435\nCVE-2020-9991\nEntry added November 12, 2020\n\nSQLite\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A remote attacker may be able to leak memory\nDescription: An information disclosure issue was addressed with\nimproved state management. \nCVE-2020-9849\nEntry added November 12, 2020\n\nSQLite\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Multiple issues in SQLite\nDescription: Multiple issues were addressed by updating SQLite to\nversion 3.32.3. \nCVE-2020-15358\nEntry added November 12, 2020\n\nSQLite\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A maliciously crafted SQL query may lead to data corruption\nDescription: This issue was addressed with improved checks. \nCVE-2020-13631\nEntry added November 12, 2020\n\nSQLite\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: A remote attacker may be able to cause arbitrary code\nexecution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2020-13630\nEntry added November 12, 2020\n\nWebKit\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2020-9947: cc working with Trend Micro Zero Day Initiative\nCVE-2020-9950: cc working with Trend Micro Zero Day Initiative\nCVE-2020-9951: Marcin \u0027Icewall\u0027 Noga of Cisco Talos\nEntry added November 12, 2020\n\nWebKit\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2020-9983: zhunki\nEntry added November 12, 2020\n\nWebKit\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: Processing maliciously crafted web content may lead to a\ncross site scripting attack\nDescription: An input validation issue was addressed with improved\ninput validation. \nCVE-2020-9952: Ryan Pickren (ryanpickren.com)\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPod touch 7th generation, iPad\nAir 2 and later, and iPad mini 4 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2020-10013: Yu Wang of Didi Research America\nEntry added November 12, 2020\n\nAdditional recognition\n\nApp Store\nWe would like to acknowledge Giyas Umarov of Holmdel High School for\ntheir assistance. \n\nAudio\nWe would like to acknowledge JunDong Xie and XingWei Lin of Ant-\nfinancial Light-Year Security Lab for their assistance. \nEntry added November 12, 2020\n\nBluetooth\nWe would like to acknowledge Andy Davis of NCC Group and Dennis\nHeinze (@ttdennis) of TU Darmstadt, Secure Mobile Networking Lab for\ntheir assistance. \n\nCallKit\nWe would like to acknowledge Federico Zanetello for their assistance. \n\nCarPlay\nWe would like to acknowledge an anonymous researcher for their\nassistance. 
\n\nClang\nWe would like to acknowledge Brandon Azad of Google Project Zero for\ntheir assistance. \nEntry added November 12, 2020\n\nCore Location\nWe would like to acknowledge Yi\u011fit Can YILMAZ (@yilmazcanyigit) for\ntheir assistance. \n\ndebugserver\nWe would like to acknowledge Linus Henze (pinauten.de) for their\nassistance. \n\niAP\nWe would like to acknowledge Andy Davis of NCC Group for their\nassistance. \n\niBoot\nWe would like to acknowledge Brandon Azad of Google Project Zero for\ntheir assistance. \n\nKernel\nWe would like to acknowledge Brandon Azad of Google Project Zero,\nStephen R\u00f6ttger of Google for their assistance. \nEntry updated November 12, 2020\n\nlibarchive\nWe would like to acknowledge Dzmitry Plotnikau and an anonymous\nresearcher for their assistance. \n\nlldb\nWe would like to acknowledge Linus Henze (pinauten.de) for their\nassistance. \nEntry added November 12, 2020\n\nLocation Framework\nWe would like to acknowledge Nicolas Brunner\n(linkedin.com/in/nicolas-brunner-651bb4128) for their assistance. \nEntry updated October 19, 2020\n\nMail\nWe would like to acknowledge an anonymous researcher for their\nassistance. \nEntry added November 12, 2020\n\nMail Drafts\nWe would like to acknowledge Jon Bottarini of HackerOne for their\nassistance. \nEntry added November 12, 2020\n\nMaps\nWe would like to acknowledge Matthew Dolan of Amazon Alexa for their\nassistance. \n\nNetworkExtension\nWe would like to acknowledge Thijs Alkemade of Computest and \u2018Qubo\nSong\u2019 of \u2018Symantec, a division of Broadcom\u2019 for their assistance. \n\nPhone Keypad\nWe would like to acknowledge Hasan Fahrettin Kaya of Akdeniz\nUniversity, an anonymous researcher for their assistance. \nEntry updated November 12, 2020 \n\nSafari\nWe would like to acknowledge Andreas Gutmann (@KryptoAndI) of\nOneSpan\u0027s Innovation Centre (onespan.com) and University College\nLondon, Steven J. 
Murdoch (@SJMurdoch) of OneSpan\u0027s Innovation Centre\n(onespan.com) and University College London, Jack Cable of Lightning\nSecurity, Ryan Pickren (ryanpickren.com), Yair Amit for their\nassistance. \nEntry added November 12, 2020\n\nSafari Reader\nWe would like to acknowledge Zhiyang Zeng(@Wester) of OPPO ZIWU\nSecurity Lab for their assistance. \nEntry added November 12, 2020\n\nSecurity\nWe would like to acknowledge Christian Starkjohann of Objective\nDevelopment Software GmbH for their assistance. \nEntry added November 12, 2020\n\nStatus Bar\nWe would like to acknowledge Abdul M. Majumder, Abdullah Fasihallah\nof Taif university, Adwait Vikas Bhide, Frederik Schmid, Nikita, and\nan anonymous researcher for their assistance. \n\nTelephony\nWe would like to acknowledge Onur Can B\u0131kmaz, Vodafone Turkey\n@canbkmaz, Yi\u011fit Can YILMAZ (@yilmazcanyigit), an anonymous\nresearcher for their assistance. \nEntry updated November 12, 2020\n\nUIKit\nWe would like to acknowledge Borja Marcos of Sarenet, Simon de Vegt,\nand Talal Haj Bakry (@hajbakri) and Tommy Mysk (@tommymysk) of Mysk\nInc for their assistance. \n\nWeb App\nWe would like to acknowledge Augusto Alvarez of Outcourse Limited for\ntheir assistance. \n\nWebKit\nWe would like to acknowledge Pawel Wylecial of REDTEAM.PL, Ryan\nPickren (ryanpickren.com), Tsubasa FUJII (@reinforchu), Zhiyang\nZeng(@Wester) of OPPO ZIWU Security Lab for their assistance. \nEntry added November 12, 2020\n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. 
When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \n\nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n\n* Navigate to Settings\n* Select General\n* Select About. The version after applying this update\nwill be \"iOS 14.0 and iPadOS 14.0\". \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAl+uxqgACgkQZcsbuWJ6\njjBhIhAAhLzDSjgjVzG0JLzEerhFBcAWQ1G8ogmIdxuC0aQfvxO4V1NriKzUcmsZ\nUgQCEdN4kzfLsj3KeuwSeq0pg2CX1eZdgY/FyuOBRzljsmGPXJgkyYapJww6mC8n\n7jeJazKusiyaRmScLYDwvbOQGlaqCfu6HrM9umMpLfwPGjFqe/gz8jyxohdVZx9t\npNC0g9l37dVJIvFRc1mAm9HAnIQoL8CDOEd96jVYiecB8xk0X6CwjZ7nGzYJc5LZ\nA54EaN0dDz+8q8jgylmAd8xkA8Pgdsxw+LWDr1TxPuu3XIzYa98S1AsItK2eiWx8\npIhrzVZ3fk1w3+W/cSWrgzUq4ouijWcWw9dmVgxmzv9ldL/pS+wIgFsYLJm4xHAp\nPH+9p3JmMQks9BWgr3h+NEcJwCUm5J7y0PNuCnQL2iKzn4jikqgfCXHZOidkPV3t\nKjeeIFX30AGI7cUqhRl9GbRn8l5SA4pbd4a0Y5df1PgkDjSXxw91Z1+5S15Qfrzs\nK8pBlPH37yU3aqMEvxBsN5Fd7vdFdA+pV/aWG5tw4pUlZJC25c50w1ZW0vrnsisg\n/isPJqXhUWiGAfQ7s5W6W3AMs4PyvRjY+7zzGiHAd+wNkUNwVTbXvKP4W4n/vGH8\nuARpQRQsureymLerXpVTwH8ZoeDEeZZwaqNHTQKg/M9ifAZPZUA=\n=WdqR\n-----END PGP SIGNATURE-----\n\n\n\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1328 - Port fix to 5.0.z for BZ-1945168\n\n6. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.13. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2122\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nThis update fixes the following bug among others:\n\n* Previously, resources for the ClusterOperator were being created early in\nthe update process, which led to update failures when the ClusterOperator\nhad no status condition while Operators were updating. This bug fix changes\nthe timing of when these resources are created. As a result, updates can\ntake place without errors. 
(BZ#1959238)\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-x86_64\n\nThe image digest is\nsha256:783a2c963f35ccab38e82e6a8c7fa954c3a4551e07d2f43c06098828dd986ed4\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-s390x\n\nThe image digest is\nsha256:4cf44e68413acad063203e1ee8982fd01d8b9c1f8643a5b31cd7ff341b3199cd\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.13-ppc64le\n\nThe image digest is\nsha256:d47ce972f87f14f1f3c5d50428d2255d1256dae3f45c938ace88547478643e36\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923268 - [Assisted-4.7] [Staging] Using two both spelling \"canceled\"  \"cancelled\"\n1947216 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1953963 - Enable/Disable host operations returns cluster resource with incomplete hosts list\n1957749 - ovn-kubernetes pod should have CPU and memory requests set but not limits\n1959238 - CVO creating cloud-controller-manager too early causing upgrade failures\n1960103 - SR-IOV obliviously reboot the node\n1961941 - Local Storage Operator using LocalVolume CR fails to create PV\u0027s when backend storage failure is simulated\n1962302 - packageserver clusteroperator does not set reason or message for Available condition\n1962312 - Deployment considered unhealthy despite being available and at latest generation\n1962435 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone\n1963115 - Test verify /run filesystem contents failing\n\n5. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202012-10\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                            https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n  Severity: Normal\n     Title: WebkitGTK+: Multiple vulnerabilities\n      Date: December 23, 2020\n      Bugs: #755947\n        ID: 202012-10\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich could result in the arbitrary execution of code. 
\n\nBackground\n==========\n\nWebKitGTK+ is a full-featured port of the WebKit rendering engine,\nsuitable for projects requiring any kind of web integration, from\nhybrid HTML/CSS applications to full-fledged web browsers. \n\nAffected packages\n=================\n\n     -------------------------------------------------------------------\n      Package              /     Vulnerable     /            Unaffected\n     -------------------------------------------------------------------\n   1  net-libs/webkit-gtk          \u003c 2.30.3                  \u003e= 2.30.3\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll WebkitGTK+ users should upgrade to the latest version:\n\n   # emerge --sync\n   # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.30.3\"\n\nReferences\n==========\n\n[ 1 ] CVE-2020-13543\n       https://nvd.nist.gov/vuln/detail/CVE-2020-13543\n[ 2 ] CVE-2020-13584\n       https://nvd.nist.gov/vuln/detail/CVE-2020-13584\n[ 3 ] CVE-2020-9948\n       https://nvd.nist.gov/vuln/detail/CVE-2020-9948\n[ 4 ] CVE-2020-9951\n       https://nvd.nist.gov/vuln/detail/CVE-2020-9951\n[ 5 ] CVE-2020-9952\n       https://nvd.nist.gov/vuln/detail/CVE-2020-9952\n[ 6 ] CVE-2020-9983\n       https://nvd.nist.gov/vuln/detail/CVE-2020-9983\n[ 7 ] WSA-2020-0008\n       https://webkitgtk.org/security/WSA-2020-0008.html\n[ 8 ] WSA-2020-0009\n       https://webkitgtk.org/security/WSA-2020-0009.html\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n  https://security.gentoo.org/glsa/202012-10\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. \nCVE-2020-9983: zhunki\n\nInstallation note:\n\nSafari 14.0 may be obtained from the Mac App Store. \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\"",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-9951"
      },
      {
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9951"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "162689"
      },
      {
        "db": "PACKETSTORM",
        "id": "160061"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "162877"
      },
      {
        "db": "PACKETSTORM",
        "id": "160701"
      },
      {
        "db": "PACKETSTORM",
        "id": "159227"
      },
      {
        "db": "PACKETSTORM",
        "id": "160062"
      },
      {
        "db": "PACKETSTORM",
        "id": "160064"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-9951",
        "trust": 2.1
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2020/11/23/3",
        "trust": 1.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160064",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "160701",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "160061",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "162689",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "159227",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "160062",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "160063",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-188076",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9951",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163789",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162837",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162877",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9951"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "162689"
      },
      {
        "db": "PACKETSTORM",
        "id": "160061"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "162877"
      },
      {
        "db": "PACKETSTORM",
        "id": "160701"
      },
      {
        "db": "PACKETSTORM",
        "id": "159227"
      },
      {
        "db": "PACKETSTORM",
        "id": "160062"
      },
      {
        "db": "PACKETSTORM",
        "id": "160064"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "id": "VAR-202010-1511",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188076"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:22:28.189000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": null,
        "trust": 0.1,
        "url": "https://www.theregister.co.uk/2020/09/21/russians_charged_for_168m_cryptocoin/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-416",
        "trust": 1.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.2,
        "url": "https://security.gentoo.org/glsa/202012-10"
      },
      {
        "trust": 1.2,
        "url": "https://support.apple.com/ht211845"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht211843"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht211844"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht211850"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht211935"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht211952"
      },
      {
        "trust": 1.1,
        "url": "https://www.debian.org/security/2020/dsa-4797"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2020/nov/20"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2020/nov/19"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2020/nov/18"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2020/nov/22"
      },
      {
        "trust": 1.1,
        "url": "http://www.openwall.com/lists/oss-security/2020/11/23/3"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9951"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9983"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9952"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13543"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9951"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9948"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-13012"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13584"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13543"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13584"
      },
      {
        "trust": 0.4,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9983"
      },
      {
        "trust": 0.4,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14347"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-25712"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14363"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-26137"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14360"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-12362"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-27619"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14345"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14344"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-23336"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14362"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14361"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14346"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9948"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9961"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9947"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9944"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9954"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9943"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9965"
      },
      {
        "trust": 0.3,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9876"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13630"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9949"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9849"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9950"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25039"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14346"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25037"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36242"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25037"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28935"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25034"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25035"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-14866"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25038"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14345"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25040"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25042"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25042"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25038"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25659"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25032"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25041"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25036"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25032"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25036"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14344"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25035"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25039"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25040"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25041"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25034"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9941"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9946"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36322"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-12114"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12114"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27835"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25704"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-3842"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13776"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24977"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-10878"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19528"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0431"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-18811"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-19528"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-12464"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14314"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14356"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27786"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25643"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24394"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-0431"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-0342"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18811"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-19523"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10543"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25285"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35508"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25212"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19523"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28974"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-10543"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15437"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25284"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10878"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11608"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-11608"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9981"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9991"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9976"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9968"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9966"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9969"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/416.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://exchange.xforce.ibmcloud.com/vulnerabilities/188409"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23240"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23239"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20201"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3119"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25217"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28211"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12364"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.4_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9964"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-6147"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9963"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9773"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9958"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14899"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht211850."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13520"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9959"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14360"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14314"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-u"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14356"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21645"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24330"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21643"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24331"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21644"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2121"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2122"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21642"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2020-0009.html"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2020-0008.html"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht211843."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9979"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9993"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9989"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht211844."
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9951"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "162689"
      },
      {
        "db": "PACKETSTORM",
        "id": "160061"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "162877"
      },
      {
        "db": "PACKETSTORM",
        "id": "160701"
      },
      {
        "db": "PACKETSTORM",
        "id": "159227"
      },
      {
        "db": "PACKETSTORM",
        "id": "160062"
      },
      {
        "db": "PACKETSTORM",
        "id": "160064"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9951"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "162689"
      },
      {
        "db": "PACKETSTORM",
        "id": "160061"
      },
      {
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "db": "PACKETSTORM",
        "id": "162877"
      },
      {
        "db": "PACKETSTORM",
        "id": "160701"
      },
      {
        "db": "PACKETSTORM",
        "id": "159227"
      },
      {
        "db": "PACKETSTORM",
        "id": "160062"
      },
      {
        "db": "PACKETSTORM",
        "id": "160064"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-10-16T00:00:00",
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "date": "2020-10-16T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9951"
      },
      {
        "date": "2021-08-11T16:15:17",
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "date": "2021-05-19T14:18:04",
        "db": "PACKETSTORM",
        "id": "162689"
      },
      {
        "date": "2020-11-13T20:32:22",
        "db": "PACKETSTORM",
        "id": "160061"
      },
      {
        "date": "2021-05-27T13:28:54",
        "db": "PACKETSTORM",
        "id": "162837"
      },
      {
        "date": "2021-06-01T14:45:29",
        "db": "PACKETSTORM",
        "id": "162877"
      },
      {
        "date": "2020-12-24T17:14:56",
        "db": "PACKETSTORM",
        "id": "160701"
      },
      {
        "date": "2020-09-18T19:10:43",
        "db": "PACKETSTORM",
        "id": "159227"
      },
      {
        "date": "2020-11-13T22:22:22",
        "db": "PACKETSTORM",
        "id": "160062"
      },
      {
        "date": "2020-11-14T12:44:44",
        "db": "PACKETSTORM",
        "id": "160064"
      },
      {
        "date": "2020-10-16T17:15:17.887000",
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-07-23T00:00:00",
        "db": "VULHUB",
        "id": "VHN-188076"
      },
      {
        "date": "2020-12-23T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9951"
      },
      {
        "date": "2024-11-21T05:41:35.240000",
        "db": "NVD",
        "id": "CVE-2020-9951"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat Security Advisory 2021-3119-01",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163789"
      }
    ],
    "trust": 0.1
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "overflow, spoof, code execution, xss",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160061"
      },
      {
        "db": "PACKETSTORM",
        "id": "160064"
      }
    ],
    "trust": 0.2
  }
}

VAR-202110-1685

Vulnerability from variot - Updated: 2025-12-22 23:18

This issue was addressed with improved checks. This issue is fixed in Security Update 2021-005 Catalina, iTunes 12.12 for Windows, tvOS 15, iOS 15 and iPadOS 15, watchOS 8. Processing a maliciously crafted image may lead to arbitrary code execution. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

APPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15

iOS 15 and iPadOS 15 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212814.

Accessory Manager Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory consumption issue was addressed with improved memory handling. CVE-2021-30837: Siddharth Aeri (@b1n4r1b01)

AppleMobileFileIntegrity Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to read sensitive information Description: This issue was addressed with improved checks. CVE-2021-30811: an anonymous researcher working with Compartir

Apple Neural Engine Available for devices with Apple Neural Engine: iPhone 8 and later, iPad Pro (3rd generation) and later, iPad Air (3rd generation) and later, and iPad mini (5th generation) Impact: A malicious application may be able to execute arbitrary code with system privileges on devices with an Apple Neural Engine Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30838: proteas wang

CoreML Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30825: hjy79425575 working with Trend Micro Zero Day Initiative

Face ID Available for devices with Face ID: iPhone X, iPhone XR, iPhone XS (all models), iPhone 11 (all models), iPhone 12 (all models), iPad Pro (11-inch), and iPad Pro (3rd generation) Impact: A 3D model constructed to look like the enrolled user may be able to authenticate via Face ID Description: This issue was addressed by improving Face ID anti-spoofing models. CVE-2021-30863: Wish Wu (吴潍浠 @wish_wu) of Ant-financial Light-Year Security Lab

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted dfont file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab

ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30835: Ye Zhang of Baidu Security CVE-2021-30847: Mike Zhang of Pangu Lab

Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2021-30857: Zweig of Kunlun Lab
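
The kernel entry above describes the generic "race condition addressed with improved locking" bug class. As a hedged illustration of that pattern only (not Apple's actual code, and the `Counter` class is hypothetical), a minimal Python sketch of serializing a concurrent read-modify-write behind a lock:

```python
import threading

class Counter:
    """Shared state guarded by a lock. "Improved locking" in advisories
    refers to this general pattern: making a read-modify-write sequence
    atomic so concurrent threads cannot interleave inside it."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Critical section: without the lock, two threads could both read
        # the same old value and one increment would be lost.
        with self._lock:
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = Counter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around each increment, the result is exactly 80_000.
```

In kernel code the same idea is expressed with mutexes or spinlocks around the vulnerable state rather than a Python `threading.Lock`, but the defect and the fix have the same shape.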

libexpat Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed by updating expat to version 2.4.1. CVE-2013-0340: an anonymous researcher

Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30819: Apple

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to access restricted files Description: A validation issue existed in the handling of symlinks. CVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)
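
The Preferences entry above cites a "validation issue in the handling of symlinks", a classic confused-deputy pattern where a privileged process follows an attacker-planted link into a restricted file. As a generic POSIX sketch of the defensive technique (an assumption-laden illustration, not Apple's fix; `open_untrusted_path` is a hypothetical helper):

```python
import os
import tempfile

def open_untrusted_path(path):
    """Open a file for reading while refusing to follow a symlink at the
    final path component: with O_NOFOLLOW, os.open raises OSError if
    `path` itself is a symbolic link."""
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    return os.fdopen(fd, "rb")

# Demonstration: a symlink pointing at a "restricted" file is rejected.
tmp = tempfile.mkdtemp()
secret = os.path.join(tmp, "secret.txt")
with open(secret, "w") as f:
    f.write("restricted")
link = os.path.join(tmp, "innocent.txt")
os.symlink(secret, link)

try:
    open_untrusted_path(link)
    result = "followed"
except OSError:
    result = "rejected"
```

Checking the path with `lstat` before opening is not enough on its own (the link can be swapped in between check and open); rejecting the link atomically at `open` time, as `O_NOFOLLOW` does, is the usual fix for this class of bug.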

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved state management. CVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)

Siri Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to view contacts from the lock screen Description: A lock screen issue allowed access to contacts on a locked device. CVE-2021-30815: an anonymous researcher

Telephony Available for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad Air 2, iPad (5th generation), and iPad mini 4 Impact: In certain situations, the baseband would fail to enable integrity and ciphering protection Description: A logic issue was addressed with improved state management. CVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST SysSec Lab

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30846: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30848: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30849: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption vulnerability was addressed with improved locking. CVE-2021-30851: Samuel Groß of Google Project Zero

Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker in physical proximity may be able to force a user onto a malicious Wi-Fi network during device setup Description: An authorization issue was addressed with improved state management. CVE-2021-30810: an anonymous researcher

Additional recognition

Assets We would like to acknowledge Cees Elzinga for their assistance.

Bluetooth We would like to acknowledge an anonymous researcher for their assistance.

File System We would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their assistance.

Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.

UIKit We would like to acknowledge an anonymous researcher for their assistance.

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About * The version after applying this update will be "15.0"

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/


CVE-2021-30811: an anonymous researcher working with Compartir

bootp Available for: Apple Watch Series 3 and later Impact: A device may be passively tracked by its WiFi MAC address Description: A user privacy issue was addressed by removing the broadcast MAC address. CVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 25, 2021

WebKit Available for: Apple Watch Series 3 and later Impact: Visiting a maliciously crafted website may reveal a user's browsing history Description: The issue was resolved with additional restrictions on CSS compositing. CVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research Entry added October 25, 2021

WebKit Available for: Apple Watch Series 3 and later Impact: An attacker in a privileged network position may be able to bypass HSTS Description: A logic issue was addressed with improved restrictions.

Alternatively, on your watch, select "My Watch > General > About". Apple is aware of a report that this issue may have been actively exploited. Entry added September 20, 2021

CoreML We would like to acknowledge hjy79425575 working with Trend Micro Zero Day Initiative for their assistance.
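The record below stores both the CVSS v3.1 vector for CVE-2021-30835 (CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H) and its derived numbers (base score 7.8, impact 5.9, exploitability 1.8). These can be cross-checked with the standard CVSS v3.1 base equations; a minimal sketch, assuming scope unchanged (as in this record) and using the weight tables from the CVSS v3.1 specification:

```python
import math

def cvss31_base(vector: str) -> float:
    """Base score for a scope-unchanged CVSS v3.1 vector string."""
    # Metric weights from the CVSS v3.1 specification (scope-unchanged PR weights).
    w = {
        "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
        "AC": {"L": 0.77, "H": 0.44},
        "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
        "UI": {"N": 0.85, "R": 0.62},
        "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
        "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
        "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
    }
    # "CVSS:3.1/AV:L/..." -> {"AV": "L", ...}; the leading "CVSS:3.1" tag is dropped.
    m = dict(part.split(":") for part in vector.split("/")[1:])
    iss = 1 - (1 - w["C"][m["C"]]) * (1 - w["I"][m["I"]]) * (1 - w["A"][m["A"]])
    impact = 6.42 * iss
    exploitability = (8.22 * w["AV"][m["AV"]] * w["AC"][m["AC"]]
                      * w["PR"][m["PR"]] * w["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    # Simplified form of the spec's "roundup" (ceiling to one decimal place).
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

print(cvss31_base("CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))  # -> 7.8
```

The intermediate values match the record as well: impact rounds to 5.9 and exploitability to 1.8. Note the `roundup` here is the common simplified ceiling; the specification defines a variant that guards against floating-point edge cases near decimal boundaries.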

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202110-1685",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "mac os x",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.15.6"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "8.0"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.12"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "mac os x",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.15"
      },
      {
        "model": "mac os x",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.15.7"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.6"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164256"
      },
      {
        "db": "PACKETSTORM",
        "id": "164249"
      },
      {
        "db": "PACKETSTORM",
        "id": "164236"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2021-30835",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-30835",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.0,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-390568",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "id": "CVE-2021-30835",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-30835",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202109-1280",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-390568",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390568"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "This issue was addressed with improved checks. This issue is fixed in Security Update 2021-005 Catalina, iTunes 12.12 for Windows, tvOS 15, iOS 15 and iPadOS 15, watchOS 8. Processing a maliciously crafted image may lead to arbitrary code execution. iOS 15 and iPadOS 15. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15\n\niOS 15 and iPadOS 15 addresses the following issues. Information\nabout the security content is also available at\nhttps://support.apple.com/HT212814. \n\nAccessory Manager\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nCVE-2021-30837: Siddharth Aeri (@b1n4r1b01)\n\nAppleMobileFileIntegrity\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to read sensitive information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30811: an anonymous researcher working with Compartir\n\nApple Neural Engine\nAvailable for devices with Apple Neural Engine: iPhone 8 and later,\niPad Pro (3rd generation) and later, iPad Air (3rd generation) and\nlater, and iPad mini (5th generation) \nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges on devices with an Apple Neural Engine\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30838: proteas wang\n\nCoreML\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30825: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nFace ID\nAvailable for devices with Face ID: iPhone X, iPhone XR, iPhone XS\n(all models), iPhone 11 (all models), iPhone 12 (all models), iPad\nPro (11-inch), and iPad Pro (3rd generation)\nImpact: A 3D model constructed to look like the enrolled user may be\nable to authenticate via Face ID\nDescription: This issue was addressed by improving Face ID anti-\nspoofing models. \nCVE-2021-30863: Wish Wu (\u5434\u6f4d\u6d60 @wish_wu) of Ant-financial Light-Year\nSecurity Lab\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted dfont file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30835: Ye Zhang of Baidu Security\nCVE-2021-30847: Mike Zhang of Pangu Lab\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A race condition was addressed with improved locking. \nCVE-2021-30857: Zweig of Kunlun Lab\n\nlibexpat\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed by updating expat to version\n2.4.1. \nCVE-2013-0340: an anonymous researcher\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30819: Apple\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to access restricted files\nDescription: A validation issue existed in the handling of symlinks. \nCVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nSiri\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to view contacts from the lock\nscreen\nDescription: A lock screen issue allowed access to contacts on a\nlocked device. \nCVE-2021-30815: an anonymous researcher\n\nTelephony\nAvailable for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad\nAir 2, iPad (5th generation), and iPad mini 4\nImpact: In certain situations, the baseband would fail to enable\nintegrity and ciphering protection\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST\nSysSec Lab\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2021-30846: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30848: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30849: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption vulnerability was addressed with\nimproved locking. \nCVE-2021-30851: Samuel Gro\u00df of Google Project Zero\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker in physical proximity may be able to force a user\nonto a malicious Wi-Fi network during device setup\nDescription: An authorization issue was addressed with improved state\nmanagement. \nCVE-2021-30810: an anonymous researcher\n\nAdditional recognition\n\nAssets\nWe would like to acknowledge Cees Elzinga for their assistance. \n\nBluetooth\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nFile System\nWe would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their\nassistance. \n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nUIKit\nWe would like to acknowledge an anonymous researcher for their\nassistance. 
\n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \"15.0\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmFI88sACgkQeC9qKD1p\nrhjVNxAAx5mEiw7NE47t6rO8Y93xPrNSk+7xcqPeBHUCg+rLmqxtOhITRVUw0QLH\nsBIwSVK1RxxRjasBfgwbh1jyMYg6EzfKNQOeZGnjLA4WRmIj728aGehJ56Z/ggpJ\nawJNPvfzyoSdFgdwntj2FDih/d4oJmMwIyhNvqGrSwuybq9sznxJsTbpRbKWFhiz\ny/phwVywo0A//XBiup3hrscl5SmxO44WBDlL+MAv4FnRPk7IGbBCfO8HnhO/9j1n\nuSNoKtnJt2f6/06E0wMLStBSESKPSmKSz65AtR6h1oNytEuJtyHME8zOtz/Z9hlJ\nfo5f6tN6uCgYgYs7dyonWk5qiL7cue3ow58dPjZuNmqxFeu5bCc/1G47LDcdwoAg\no0aU22MEs4wKdBWrtFI40BQ5LEmFrh3NM82GwFYDBz92x7B9SpXMau6F82pCvvyG\nwNbRL5PzHUMa1/LfVcoOyWk0gZBeD8/GFzNmwBeZmBMlpoPRo5bkyjUn8/ABG+Ie\n665EHgGYs5BIV20Gviax0g4bjGHqEndfxdjyDHzJ87d65iA5uYdOiw61WpHq9kwH\nhK4e6BF0CahTrpz6UGVuOA/VMMDv3ZqhW+a95KBzDa7dHVnokpV+WiMJcmoEJ5gI\n5ovpqig9qjf7zMv3+3Mlij0anQ24w7+MjEBk7WDG+2al83GCMAE=\n=nkJd\n-----END PGP SIGNATURE-----\n\n\n\n. \nCVE-2021-30811: an anonymous researcher working with Compartir\n\nbootp\nAvailable for: Apple Watch Series 3 and later\nImpact: A device may be passively tracked by its WiFi MAC address\nDescription: A user privacy issue was addressed by removing the\nbroadcast MAC address. \nCVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 25, 2021\n\nWebKit\nAvailable for: Apple Watch Series 3 and later\nImpact: Visiting a maliciously crafted website may reveal a user\u0027s\nbrowsing history\nDescription: The issue was resolved with additional restrictions on\nCSS compositing. \nCVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research\nEntry added October 25, 2021\n\nWebKit\nAvailable for: Apple Watch Series 3 and later\nImpact: An attacker in a privileged network position may be able to\nbypass HSTS\nDescription: A logic issue was addressed with improved restrictions. \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\". Apple is aware of a report that this issue may have\nbeen actively exploited. 
\nEntry added September 20, 2021\n\nCoreML\nWe would like to acknowledge hjy79425575 working with Trend Micro\nZero Day Initiative for their assistance",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-30835"
      },
      {
        "db": "VULHUB",
        "id": "VHN-390568"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30835"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164256"
      },
      {
        "db": "PACKETSTORM",
        "id": "164249"
      },
      {
        "db": "PACKETSTORM",
        "id": "164236"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      }
    ],
    "trust": 1.71
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-30835",
        "trust": 2.5
      },
      {
        "db": "PACKETSTORM",
        "id": "164692",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164256",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2021092024",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3158",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3578",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164693",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164689",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-390568",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30835",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164233",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164249",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164236",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164234",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390568"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30835"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164256"
      },
      {
        "db": "PACKETSTORM",
        "id": "164249"
      },
      {
        "db": "PACKETSTORM",
        "id": "164236"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "id": "VAR-202110-1685",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390568"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:18:55.560000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Apple iTunes Buffer error vulnerability fix",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=163095"
      },
      {
        "title": "Apple: iOS 15 and iPadOS 15",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=f34e9bd3d1b055a84fc033981719f5fb"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-30835"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.3,
        "url": "https://support.apple.com/en-us/ht212815"
      },
      {
        "trust": 2.3,
        "url": "https://support.apple.com/en-us/ht212817"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht212804"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht212953"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2021/oct/62"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2021/oct/63"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2021/oct/61"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212805"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212814"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212819"
      },
      {
        "trust": 1.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30835"
      },
      {
        "trust": 0.7,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30847"
      },
      {
        "trust": 0.7,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30841"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30857"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30843"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30842"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2013-0340"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3578"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3158"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164692/apple-security-advisory-2021-10-26-10.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164256/apple-security-advisory-2021-09-20-10.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht212953"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-macos-multiple-vulnerabilities-36461"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021092024"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30810"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30837"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30854"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30855"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30811"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30850"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30852"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30808"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/ht212815."
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30834"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30831"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30866"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30814"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30840"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/ht212819."
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/fulldisclosure/2021/sep/42"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht212814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30815"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212814."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30863"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30838"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30825"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30819"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/download/"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212817."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30830"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30832"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29622"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30828"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212805."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30844"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/downloads/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30859"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30829"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30783"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30713"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30827"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30860"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390568"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30835"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164256"
      },
      {
        "db": "PACKETSTORM",
        "id": "164249"
      },
      {
        "db": "PACKETSTORM",
        "id": "164236"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-390568"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30835"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164256"
      },
      {
        "db": "PACKETSTORM",
        "id": "164249"
      },
      {
        "db": "PACKETSTORM",
        "id": "164236"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-10-19T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390568"
      },
      {
        "date": "2021-09-22T16:22:10",
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "date": "2021-10-28T14:58:57",
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "date": "2021-10-28T14:58:43",
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "date": "2021-09-22T16:40:42",
        "db": "PACKETSTORM",
        "id": "164256"
      },
      {
        "date": "2021-09-22T16:35:10",
        "db": "PACKETSTORM",
        "id": "164249"
      },
      {
        "date": "2021-09-22T16:24:22",
        "db": "PACKETSTORM",
        "id": "164236"
      },
      {
        "date": "2021-09-22T16:22:32",
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "date": "2021-09-20T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      },
      {
        "date": "2021-10-19T14:15:09.210000",
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-02-11T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390568"
      },
      {
        "date": "2022-01-21T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      },
      {
        "date": "2022-02-11T14:38:44.210000",
        "db": "NVD",
        "id": "CVE-2021-30835"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple iTunes Buffer error vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-1280"
      }
    ],
    "trust": 0.6
  }
}

VAR-202212-1751

Vulnerability from variot - Updated: 2025-12-22 23:15

A type confusion issue was addressed with improved state handling. This issue is fixed in Safari 16.2, tvOS 16.2, macOS Ventura 13.1, iOS 15.7.2 and iPadOS 15.7.2, and iOS 16.1.2. Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1. Multiple Apple products, including Safari, iPadOS, and iOS, contain a type confusion vulnerability. Successful exploitation may allow information to be obtained or tampered with, or may cause a denial-of-service (DoS) condition. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256


Debian Security Advisory DSA-5308-1                   security@debian.org
https://www.debian.org/security/                           Alberto Garcia
December 31, 2022                     https://www.debian.org/security/faq

Package        : webkit2gtk
CVE ID         : CVE-2022-42852 CVE-2022-42856 CVE-2022-42867
                 CVE-2022-46692 CVE-2022-46698 CVE-2022-46699
                 CVE-2022-46700

The following vulnerabilities have been discovered in the WebKitGTK web engine:

CVE-2022-42852

hazbinhotel discovered that processing maliciously crafted web
content may result in the disclosure of process memory.

For the stable distribution (bullseye), these problems have been fixed in version 2.38.3-1~deb11u1.

We recommend that you upgrade your webkit2gtk packages.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: webkit2gtk3 security update
Advisory ID:       RHSA-2023:0016-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:0016
Issue date:        2023-01-04
CVE Names:         CVE-2022-42856
====================================================================

1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
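For context on how such a severity rating is derived: NVD records this vulnerability with the vector CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H and a base score of 8.8. That score follows mechanically from the CVSS v3.1 base-score equations; below is a minimal sketch of that arithmetic (the weight table is deliberately limited to the metric values this particular vector uses, and the Scope: Unchanged branch is hard-coded):

```python
# CVSS v3.1 metric weights, restricted to the values appearing in
# this advisory's vector (Scope: Unchanged, so "unchanged" PR weights).
WEIGHTS = {
    "AV": {"N": 0.85}, "AC": {"L": 0.77},
    "PR": {"N": 0.85}, "UI": {"R": 0.62},
    "C": {"H": 0.56}, "I": {"H": 0.56}, "A": {"H": 0.56},
}

def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal number >= value,
    computed in integer arithmetic to avoid float artifacts."""
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000
    return (scaled // 10000 + 1) / 10

def base_score(vector: str) -> float:
    # "CVSS:3.1/AV:N/..." -> {"AV": "N", ...}, keeping only scored metrics
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    w = {k: WEIGHTS[k][v] for k, v in metrics.items() if k in WEIGHTS}
    iss = 1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"])
    impact = 6.42 * iss                                   # Scope: Unchanged
    exploitability = 8.22 * w["AV"] * w["AC"] * w["PR"] * w["UI"]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))  # 8.8
```

The impact sub-score (6.42 × ISS ≈ 5.87) and exploitability sub-score (≈ 2.84) match the 5.9 and 2.8 recorded in this entry's CVSS data.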

2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64

3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: webkit2gtk3-2.36.7-1.el8_7.1.src.rpm

aarch64:
webkit2gtk3-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-debugsource-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-devel-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-jsc-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.aarch64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm

ppc64le:
webkit2gtk3-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-debugsource-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-devel-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-jsc-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.ppc64le.rpm
webkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm

s390x:
webkit2gtk3-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-debuginfo-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-debugsource-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-devel-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-jsc-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.s390x.rpm
webkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.s390x.rpm

x86_64:
webkit2gtk3-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-debuginfo-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-debugsource-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-debugsource-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-devel-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-devel-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-jsc-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-jsc-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.x86_64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.i686.rpm
webkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

6. References:

https://access.redhat.com/security/cve/CVE-2022-42856
https://access.redhat.com/security/updates/classification/#important

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2022-12-13-2 iOS 15.7.2 and iPadOS 15.7.2

iOS 15.7.2 and iPadOS 15.7.2 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213531.

AppleAVD Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Parsing a maliciously crafted video file may lead to kernel code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46694: Andrey Labunets and Nikita Tarakanov

AVEVideoEncoder Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An app may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved checks. CVE-2022-42848: ABC Research s.r.o

File System Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An app may be able to break out of its sandbox Description: This issue was addressed with improved checks. CVE-2022-42861: pattern-f (@pattern_F_) of Ant Security Light-Year Lab

Graphics Driver Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Parsing a maliciously crafted video file may lead to unexpected system termination Description: The issue was addressed with improved memory handling. CVE-2022-42846: Willy R. Vasquez of The University of Texas at Austin

IOHIDFamily Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved state handling. CVE-2022-42864: Tommy Muir (@Muirey03)

iTunes Store Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: An issue existed in the parsing of URLs. CVE-2022-42837: Weijia Dai (@dwj1210) of Momo Security

Kernel Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with additional validation. CVE-2022-46689: Ian Beer of Google Project Zero

libxml2 Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: An integer overflow was addressed through improved input validation. CVE-2022-40303: Maddie Stone of Google Project Zero
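The libxml2 issue above (CVE-2022-40303) belongs to the integer-overflow class, CWE-190: length arithmetic wraps around a fixed-width integer, so a later bounds check accepts a value it should reject. The sketch below is purely illustrative, simulating C's 32-bit unsigned wraparound in Python rather than reproducing libxml2's actual code:

```python
U32_MAX = 0xFFFFFFFF

def u32_add(a: int, b: int) -> int:
    """Simulate C's wrapping unsigned 32-bit addition."""
    return (a + b) & U32_MAX

def naive_bounds_check(offset: int, length: int, buf_size: int) -> bool:
    """Classic broken check: offset + length can wrap past zero and
    compare as 'small', letting an enormous length slip through."""
    return u32_add(offset, length) <= buf_size

def safe_bounds_check(offset: int, length: int, buf_size: int) -> bool:
    """Validate without performing the overflowing addition first."""
    return offset <= buf_size and length <= buf_size - offset

# A wrapping length defeats the naive check but not the rearranged one:
offset, length, buf_size = 16, U32_MAX - 8, 1024
assert naive_bounds_check(offset, length, buf_size)      # wrongly passes
assert not safe_bounds_check(offset, length, buf_size)   # correctly rejects
```

This is why such fixes are described as "addressed through improved input validation": the check is rearranged so the overflow can never occur before the comparison.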

libxml2 Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project Zero

ppp Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42840: an anonymous researcher

Preferences Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An app may be able to use arbitrary entitlements Description: A logic issue was addressed with improved state management. CVE-2022-42855: Ivan Fratric of Google Project Zero

Safari Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Visiting a website that frames malicious content may lead to UI spoofing Description: A spoofing issue existed in the handling of URLs. CVE-2022-46695: KirtiKumar Anandrao Ramchandani

WebKit Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory consumption issue was addressed with improved memory handling. WebKit Bugzilla: 245466 CVE-2022-46691: an anonymous researcher

WebKit Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may result in the disclosure of process memory Description: The issue was addressed with improved memory handling. CVE-2022-42852: hazbinhotel working with Trend Micro Zero Day Initiative

WebKit Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may bypass Same Origin Policy Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 246783 CVE-2022-46692: KirtiKumar Anandrao Ramchandani

WebKit Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. WebKit Bugzilla: 247562 CVE-2022-46700: Samuel Groß of Google V8 Security

WebKit Available for: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution. WebKit Bugzilla: 248266 CVE-2022-42856: Clément Lecigne of Google's Threat Analysis Group
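The bug class behind CVE-2022-42856 is type confusion (CWE-843): the engine is tricked into treating the memory of one object as though it held a value of another type. As a purely illustrative sketch (not WebKit's actual code), reinterpreting the raw bytes of one type as another shows how the same memory yields wildly different values under the two interpretations:

```python
import struct

def reinterpret_double_as_u64(x: float) -> int:
    """Reinterpret the raw IEEE-754 bytes of a double as an unsigned
    64-bit integer -- the kind of mismatch a type-confusion bug creates
    when code reads one object's memory under another type's layout."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# The same 8 bytes mean very different things under the two types:
bits = reinterpret_double_as_u64(1.0)
# IEEE-754 double 1.0: sign 0, biased exponent 0x3FF, zero mantissa
assert bits == 0x3FF0000000000000
```

In a JIT-compiled engine, an attacker who controls which type the generated code assumes can escalate such a mismatch into out-of-bounds reads and writes, and from there into arbitrary code execution, which is why this entry carries that impact.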

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated:
* Navigate to Settings
* Select General
* Select About
The version after applying this update will be "iOS 15.7.2 and iPadOS 15.7.2".

All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFYIACgkQ4RjMIDke Nxn07xAA2gJjgY+Ql7WKtXSxsU4snP+8d2zDgD5hKkZVETJrKG1TGwAKK/YdW0Ug 8nmj/DIU+RRsU8KpGLiN4TGsHVot1GaDwwMQ/HCUG4bHFjknc0TqrTkUCfG6rnkh bFouinoNfyPf7gcmLkQg2AhrNjA/a9QJzfmX2XKtGybIN1kFDd8eMKb/setgVQUS 12h4N6YBI4WjSGvyZ8vagqpMRAz0An4lSEoa21CN1ViM0E4wBWIyU8Ux05fN+Lvn 2gefdjyGV5IP63z3kYe4D/Vt6yatBo1n7ERM9If7IMGO5O8DJqrynTZ3q1kCsxcR QheJHkPZDF/8ogjAdNiOzqybSzhhcNk0uteTSXX1tYGvRos7GyDSTG6/KjuFvB60 Wohwzr16o6VkcZyaU41cA9dWKrv3+RRTm7UR2/CKGngrjcnm4jAotR+pjtQU444v ECGQVTx/qat7Eu+IFe2llm8JcjBHjx1R6Rbb8sqmzD4lVDja/aZ2491vsVdOytq+ cZ59nZqwbG7vo8mBow+zEcoKsh8pAGRYoLW3WU1MetNt04V9d+7Fv7wG/+BNkzHN qOhOa7+4SwD8wvApxdF2+hZgSD26owwkbfG4hnf71w4DSDPpwRC/CoRvhDMZToSl sPLIWP29jIII09N8TNW3PVlIAjquv7oxir7BohtGu5ioSmgI3fQ= =wLMf -----END PGP SIGNATURE-----

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202305-32
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
                                            https://security.gentoo.org/

 Severity: High
    Title: WebKitGTK+: Multiple Vulnerabilities
     Date: May 30, 2023
     Bugs: #871732, #879571, #888563, #905346, #905349, #905351
       ID: 202305-32


Synopsis

Multiple vulnerabilities have been found in WebkitGTK+, the worst of which could result in arbitrary code execution.

Affected packages

Package              Vulnerable    Unaffected
-------------------  ------------  ------------
net-libs/webkit-gtk  < 2.40.1      >= 2.40.1

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebKitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.40.1"

References

[ 1 ] CVE-2022-32885      https://nvd.nist.gov/vuln/detail/CVE-2022-32885
[ 2 ] CVE-2022-32886      https://nvd.nist.gov/vuln/detail/CVE-2022-32886
[ 3 ] CVE-2022-32888      https://nvd.nist.gov/vuln/detail/CVE-2022-32888
[ 4 ] CVE-2022-32891      https://nvd.nist.gov/vuln/detail/CVE-2022-32891
[ 5 ] CVE-2022-32923      https://nvd.nist.gov/vuln/detail/CVE-2022-32923
[ 6 ] CVE-2022-42799      https://nvd.nist.gov/vuln/detail/CVE-2022-42799
[ 7 ] CVE-2022-42823      https://nvd.nist.gov/vuln/detail/CVE-2022-42823
[ 8 ] CVE-2022-42824      https://nvd.nist.gov/vuln/detail/CVE-2022-42824
[ 9 ] CVE-2022-42826      https://nvd.nist.gov/vuln/detail/CVE-2022-42826
[ 10 ] CVE-2022-42852     https://nvd.nist.gov/vuln/detail/CVE-2022-42852
[ 11 ] CVE-2022-42856     https://nvd.nist.gov/vuln/detail/CVE-2022-42856
[ 12 ] CVE-2022-42863     https://nvd.nist.gov/vuln/detail/CVE-2022-42863
[ 13 ] CVE-2022-42867     https://nvd.nist.gov/vuln/detail/CVE-2022-42867
[ 14 ] CVE-2022-46691     https://nvd.nist.gov/vuln/detail/CVE-2022-46691
[ 15 ] CVE-2022-46692     https://nvd.nist.gov/vuln/detail/CVE-2022-46692
[ 16 ] CVE-2022-46698     https://nvd.nist.gov/vuln/detail/CVE-2022-46698
[ 17 ] CVE-2022-46699     https://nvd.nist.gov/vuln/detail/CVE-2022-46699
[ 18 ] CVE-2022-46700     https://nvd.nist.gov/vuln/detail/CVE-2022-46700
[ 19 ] CVE-2023-23517     https://nvd.nist.gov/vuln/detail/CVE-2023-23517
[ 20 ] CVE-2023-23518     https://nvd.nist.gov/vuln/detail/CVE-2023-23518
[ 21 ] CVE-2023-23529     https://nvd.nist.gov/vuln/detail/CVE-2023-23529
[ 22 ] CVE-2023-25358     https://nvd.nist.gov/vuln/detail/CVE-2023-25358
[ 23 ] CVE-2023-25360     https://nvd.nist.gov/vuln/detail/CVE-2023-25360
[ 24 ] CVE-2023-25361     https://nvd.nist.gov/vuln/detail/CVE-2023-25361
[ 25 ] CVE-2023-25362     https://nvd.nist.gov/vuln/detail/CVE-2023-25362
[ 26 ] CVE-2023-25363     https://nvd.nist.gov/vuln/detail/CVE-2023-25363
[ 27 ] CVE-2023-27932     https://nvd.nist.gov/vuln/detail/CVE-2023-27932
[ 28 ] CVE-2023-27954     https://nvd.nist.gov/vuln/detail/CVE-2023-27954
[ 29 ] CVE-2023-28205     https://nvd.nist.gov/vuln/detail/CVE-2023-28205
[ 30 ] WSA-2022-0009      https://webkitgtk.org/security/WSA-2022-0009.html
[ 31 ] WSA-2022-0010      https://webkitgtk.org/security/WSA-2022-0010.html
[ 32 ] WSA-2023-0001      https://webkitgtk.org/security/WSA-2023-0001.html
[ 33 ] WSA-2023-0002      https://webkitgtk.org/security/WSA-2023-0002.html
[ 34 ] WSA-2023-0003      https://webkitgtk.org/security/WSA-2023-0003.html

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202305-32

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2023 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202212-1751",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.1.2"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.2"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.7.2"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.2"
      },
      {
        "model": "iphone os",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.7.2"
      },
      {
        "model": "ios",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": "16.2"
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "safari",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "170374"
      },
      {
        "db": "PACKETSTORM",
        "id": "170367"
      }
    ],
    "trust": 0.2
  },
  "cve": "CVE-2022-42856",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-42856",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-42856",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-42856",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
            "id": "CVE-2022-42856",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-42856",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202212-3232",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A type confusion issue was addressed with improved state handling. This issue is fixed in Safari 16.2, tvOS 16.2, macOS Ventura 13.1, iOS 15.7.2 and iPadOS 15.7.2, iOS 16.1.2. Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1.. Safari , iPadOS , iOS Multiple Apple products have a type mixup vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5308-1                   security@debian.org\nhttps://www.debian.org/security/                           Alberto Garcia\nDecember 31, 2022                     https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : webkit2gtk\nCVE ID         : CVE-2022-42852 CVE-2022-42856 CVE-2022-42867 CVE-2022-46692\n                 CVE-2022-46698 CVE-2022-46699 CVE-2022-46700\n\nThe following vulnerabilities have been discovered in the WebKitGTK\nweb engine:\n\nCVE-2022-42852\n\n    hazbinhotel discovered that processing maliciously crafted web\n    content may result in the disclosure of process memory. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.38.3-1~deb11u1. \n\nWe recommend that you upgrade your webkit2gtk packages. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: webkit2gtk3 security update\nAdvisory ID:       RHSA-2023:0016-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2023:0016\nIssue date:        2023-01-04\nCVE Names:         CVE-2022-42856\n====================================================================\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nwebkit2gtk3-2.36.7-1.el8_7.1.src.rpm\n\naarch64:\nwebkit2gtk3-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-debugsource-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-devel-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-jsc-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-debugsource-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-devel-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-jsc-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-debuginfo-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-debugsource-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-devel-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-jsc-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-debuginfo-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-debugsource-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-debugsource-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-devel-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-devel-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-jsc-2.36.
7-1.el8_7.1.i686.rpm\nwebkit2gtk3-jsc-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-jsc-devel-2.36.7-1.el8_7.1.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.36.7-1.el8_7.1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-42856\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-12-13-2 iOS 15.7.2 and iPadOS 15.7.2\n\niOS 15.7.2 and iPadOS 15.7.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213531. \n\nAppleAVD\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Parsing a maliciously crafted video file may lead to kernel\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-46694: Andrey Labunets and Nikita Tarakanov\n\nAVEVideoEncoder\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-42848: ABC Research s.r.o\n\nFile System\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: An app may be able to break out of its sandbox\nDescription: This issue was addressed with improved checks. \nCVE-2022-42861: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nGraphics Driver\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Parsing a maliciously crafted video file may lead to\nunexpected system termination\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42846: Willy R. Vasquez of The University of Texas at Austin\n\nIOHIDFamily\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved state\nhandling. 
\nCVE-2022-42864: Tommy Muir (@Muirey03)\n\niTunes Store\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An issue existed in the parsing of URLs. \nCVE-2022-42837: Weijia Dai (@dwj1210) of Momo Security\n\nKernel\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with additional\nvalidation. \nCVE-2022-46689: Ian Beer of Google Project Zero\n\nlibxml2\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An integer overflow was addressed through improved input\nvalidation. \nCVE-2022-40303: Maddie Stone of Google Project Zero\n\nlibxml2\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: This issue was addressed with improved checks. 
\nCVE-2022-40304: Ned Williamson and Nathan Wachholz of Google Project\nZero\n\nppp\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42840: an anonymous researcher\n\nPreferences\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: An app may be able to use arbitrary entitlements\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-42855: Ivan Fratric of Google Project Zero\n\nSafari\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: A spoofing issue existed in the handling of URLs. \nCVE-2022-46695: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. 
\nWebKit Bugzilla: 245466\nCVE-2022-46691: an anonymous researcher\n\nWebKit\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Processing maliciously crafted web content may result in the\ndisclosure of process memory\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42852: hazbinhotel working with Trend Micro Zero Day\nInitiative\n\nWebKit\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Processing maliciously crafted web content may bypass Same\nOrigin Policy\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 246783\nCVE-2022-46692: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. \nWebKit Bugzilla: 247562\nCVE-2022-46700: Samuel Gro\u00df of Google V8 Security\n\nWebKit\nAvailable for: iPhone 6s (all models), iPhone 7 (all models), iPhone\nSE (1st generation), iPad Pro (all models), iPad Air 2 and later,\niPad 5th generation and later, iPad mini 4 and later, and iPod touch\n(7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution. 
\nWebKit Bugzilla: 248266\nCVE-2022-42856: Cl\u00e9ment Lecigne of Google\u0027s Threat Analysis Group\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/  iTunes and Software Update on the\ndevice will automatically check Apple\u0027s update server on its weekly\nschedule. When an update is detected, it is downloaded and the option\nto be installed is presented to the user when the iOS device is\ndocked. We recommend applying the update immediately if possible. \nSelecting Don\u0027t Install will present the option the next time you\nconnect your iOS device.  The automatic update process may take up to\na week depending on the day that iTunes or the device checks for\nupdates. You may manually obtain the update via the Check for Updates\nbutton within iTunes, or the Software Update on your device.  To\ncheck that the iPhone, iPod touch, or iPad has been updated:  *\nNavigate to Settings * Select General * Select About. The version\nafter applying this update will be \"iOS 15.7.2 and iPadOS 15.7.2\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFYIACgkQ4RjMIDke\nNxn07xAA2gJjgY+Ql7WKtXSxsU4snP+8d2zDgD5hKkZVETJrKG1TGwAKK/YdW0Ug\n8nmj/DIU+RRsU8KpGLiN4TGsHVot1GaDwwMQ/HCUG4bHFjknc0TqrTkUCfG6rnkh\nbFouinoNfyPf7gcmLkQg2AhrNjA/a9QJzfmX2XKtGybIN1kFDd8eMKb/setgVQUS\n12h4N6YBI4WjSGvyZ8vagqpMRAz0An4lSEoa21CN1ViM0E4wBWIyU8Ux05fN+Lvn\n2gefdjyGV5IP63z3kYe4D/Vt6yatBo1n7ERM9If7IMGO5O8DJqrynTZ3q1kCsxcR\nQheJHkPZDF/8ogjAdNiOzqybSzhhcNk0uteTSXX1tYGvRos7GyDSTG6/KjuFvB60\nWohwzr16o6VkcZyaU41cA9dWKrv3+RRTm7UR2/CKGngrjcnm4jAotR+pjtQU444v\nECGQVTx/qat7Eu+IFe2llm8JcjBHjx1R6Rbb8sqmzD4lVDja/aZ2491vsVdOytq+\ncZ59nZqwbG7vo8mBow+zEcoKsh8pAGRYoLW3WU1MetNt04V9d+7Fv7wG/+BNkzHN\nqOhOa7+4SwD8wvApxdF2+hZgSD26owwkbfG4hnf71w4DSDPpwRC/CoRvhDMZToSl\nsPLIWP29jIII09N8TNW3PVlIAjquv7oxir7BohtGu5ioSmgI3fQ=\n=wLMf\n-----END PGP SIGNATURE-----\n\n\n. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202305-32\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebKitGTK+: Multiple Vulnerabilities\n     Date: May 30, 2023\n     Bugs: #871732, #879571, #888563, #905346, #905349, #905351\n       ID: 202305-32\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in arbitrary code execution. 
\n\nAffected packages\n================\nPackage              Vulnerable    Unaffected\n-------------------  ------------  ------------\nnet-libs/webkit-gtk  \u003c 2.40.1      \u003e= 2.40.1\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll WebKitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.40.1\"\n\nReferences\n=========\n[ 1 ] CVE-2022-32885\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32885\n[ 2 ] CVE-2022-32886\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32886\n[ 3 ] CVE-2022-32888\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32888\n[ 4 ] CVE-2022-32891\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32891\n[ 5 ] CVE-2022-32923\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32923\n[ 6 ] CVE-2022-42799\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42799\n[ 7 ] CVE-2022-42823\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42823\n[ 8 ] CVE-2022-42824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42824\n[ 9 ] CVE-2022-42826\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42826\n[ 10 ] CVE-2022-42852\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42852\n[ 11 ] CVE-2022-42856\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42856\n[ 12 ] CVE-2022-42863\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42863\n[ 13 ] CVE-2022-42867\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42867\n[ 14 ] CVE-2022-46691\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46691\n[ 15 ] CVE-2022-46692\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46692\n[ 16 ] CVE-2022-46698\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46698\n[ 17 ] CVE-2022-46699\n      
https://nvd.nist.gov/vuln/detail/CVE-2022-46699\n[ 18 ] CVE-2022-46700\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46700\n[ 19 ] CVE-2023-23517\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23517\n[ 20 ] CVE-2023-23518\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23518\n[ 21 ] CVE-2023-23529\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23529\n[ 22 ] CVE-2023-25358\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25358\n[ 23 ] CVE-2023-25360\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25360\n[ 24 ] CVE-2023-25361\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25361\n[ 25 ] CVE-2023-25362\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25362\n[ 26 ] CVE-2023-25363\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25363\n[ 27 ] CVE-2023-27932\n      https://nvd.nist.gov/vuln/detail/CVE-2023-27932\n[ 28 ] CVE-2023-27954\n      https://nvd.nist.gov/vuln/detail/CVE-2023-27954\n[ 29 ] CVE-2023-28205\n      https://nvd.nist.gov/vuln/detail/CVE-2023-28205\n[ 30 ] WSA-2022-0009\n      https://webkitgtk.org/security/WSA-2022-0009.html\n[ 31 ] WSA-2022-0010\n      https://webkitgtk.org/security/WSA-2022-0010.html\n[ 32 ] WSA-2023-0001\n      https://webkitgtk.org/security/WSA-2023-0001.html\n[ 33 ] WSA-2023-0002\n      https://webkitgtk.org/security/WSA-2023-0002.html\n[ 34 ] WSA-2023-0003\n      https://webkitgtk.org/security/WSA-2023-0003.html\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202305-32\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2023 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). 
\n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "db": "VULHUB",
        "id": "VHN-439663"
      },
      {
        "db": "PACKETSTORM",
        "id": "170374"
      },
      {
        "db": "PACKETSTORM",
        "id": "170350"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170367"
      },
      {
        "db": "PACKETSTORM",
        "id": "170313"
      },
      {
        "db": "PACKETSTORM",
        "id": "170312"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      }
    ],
    "trust": 2.34
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-439663",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439663"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-42856",
        "trust": 4.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2022/12/26/1",
        "trust": 2.5
      },
      {
        "db": "PACKETSTORM",
        "id": "170350",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "170367",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "170374",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "170319",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0118",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0074",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0058",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "170695",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "170313",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170349",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170312",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170317",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-439663",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172625",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439663"
      },
      {
        "db": "PACKETSTORM",
        "id": "170374"
      },
      {
        "db": "PACKETSTORM",
        "id": "170350"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170367"
      },
      {
        "db": "PACKETSTORM",
        "id": "170313"
      },
      {
        "db": "PACKETSTORM",
        "id": "170312"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "id": "VAR-202212-1751",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439663"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:15:17.574000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT213535 Apple Security update",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT213516"
      },
      {
        "title": "Apple iOS Security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=217555"
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-843",
        "trust": 1.1
      },
      {
        "problemtype": "Type confusion (CWE-843) [NVD evaluation]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439663"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/21"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/22"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/23"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/26"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/28"
      },
      {
        "trust": 2.5,
        "url": "http://www.openwall.com/lists/oss-security/2022/12/26/1"
      },
      {
        "trust": 2.5,
        "url": "https://security.gentoo.org/glsa/202305-32"
      },
      {
        "trust": 2.3,
        "url": "https://support.apple.com/en-us/ht213537"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213516"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213531"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213532"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213535"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42856"
      },
      {
        "trust": 1.0,
        "url": "https://www.cisa.gov/known-exploited-vulnerabilities-catalog?field_cve=cve-2022-42856"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-42856"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/known-exploited-vulnerabilities-catalog"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170374/red-hat-security-advisory-2023-0021-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0074"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170367/red-hat-security-advisory-2023-0016-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170319/apple-security-advisory-2022-12-13-9.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht213597"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170695/apple-security-advisory-2023-01-23-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0118"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0058"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-42856/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170350/debian-security-advisory-5309-1.html"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46692"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46698"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46700"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42867"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46699"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.2,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.2,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.2,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/en-us/ht201222."
      },
      {
        "trust": 0.2,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0021"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/wpewebkit"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0016"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213516."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42861"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42848"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42846"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42855"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42864"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213531."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46689"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42840"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42837"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32891"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0010.html"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0001.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32888"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42799"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23517"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0009.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42826"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0003.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23518"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32885"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-27932"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42823"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-27954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25361"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32923"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25360"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32886"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-28205"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439663"
      },
      {
        "db": "PACKETSTORM",
        "id": "170374"
      },
      {
        "db": "PACKETSTORM",
        "id": "170350"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170367"
      },
      {
        "db": "PACKETSTORM",
        "id": "170313"
      },
      {
        "db": "PACKETSTORM",
        "id": "170312"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-439663"
      },
      {
        "db": "PACKETSTORM",
        "id": "170374"
      },
      {
        "db": "PACKETSTORM",
        "id": "170350"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170367"
      },
      {
        "db": "PACKETSTORM",
        "id": "170313"
      },
      {
        "db": "PACKETSTORM",
        "id": "170312"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-12-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-439663"
      },
      {
        "date": "2023-01-05T15:27:19",
        "db": "PACKETSTORM",
        "id": "170374"
      },
      {
        "date": "2023-01-02T14:20:15",
        "db": "PACKETSTORM",
        "id": "170350"
      },
      {
        "date": "2023-01-02T14:19:00",
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "date": "2023-01-04T14:30:38",
        "db": "PACKETSTORM",
        "id": "170367"
      },
      {
        "date": "2022-12-22T02:11:27",
        "db": "PACKETSTORM",
        "id": "170313"
      },
      {
        "date": "2022-12-22T02:11:02",
        "db": "PACKETSTORM",
        "id": "170312"
      },
      {
        "date": "2023-05-30T16:32:33",
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "date": "2022-12-13T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      },
      {
        "date": "2023-11-30T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "date": "2022-12-15T19:15:25.123000",
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-439663"
      },
      {
        "date": "2023-05-31T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      },
      {
        "date": "2023-11-30T03:04:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      },
      {
        "date": "2025-10-23T18:02:34.043000",
        "db": "NVD",
        "id": "CVE-2022-42856"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Mistype vulnerability in multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023819"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3232"
      }
    ],
    "trust": 0.6
  }
}

VAR-202109-1315

Vulnerability from variot - Updated: 2025-12-22 23:11

A memory corruption issue was addressed with improved state management. This issue is fixed in watchOS 7.4.1, iOS 14.5.1 and iPadOS 14.5.1, tvOS 14.6, iOS 12.5.3, and macOS Big Sur 11.3.1. Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. Multiple Apple products contain a vulnerability related to out-of-bounds writes; information may be obtained, information may be tampered with, and service operation may be interrupted (DoS). Apple Safari is Apple's web browser and the default browser of the Mac OS X and iOS operating systems. A buffer error vulnerability exists in Apple Safari that stems from a bounds error in WebKit. A remote attacker could exploit this vulnerability to execute arbitrary code on the target system. The following products and versions are affected: Apple Safari: 14.0, 14.0.1, 14.0.2, 14.0.3, 14.0.3-14610.4.3.1.7, 14.0.3-15610.4.3.1.7, 14.1. APPLE-SA-2021-05-03-3 watchOS 7.4.1. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: GNOME security, bug fix, and enhancement update
Advisory ID:       RHSA-2021:4381-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:4381
Issue date:        2021-11-09
CVE Names:         CVE-2020-13558 CVE-2020-24870 CVE-2020-27918
                   CVE-2020-29623 CVE-2020-36241 CVE-2021-1765
                   CVE-2021-1788 CVE-2021-1789 CVE-2021-1799
                   CVE-2021-1801 CVE-2021-1844 CVE-2021-1870
                   CVE-2021-1871 CVE-2021-21775 CVE-2021-21779
                   CVE-2021-21806 CVE-2021-28650 CVE-2021-30663
                   CVE-2021-30665 CVE-2021-30682 CVE-2021-30689
                   CVE-2021-30720 CVE-2021-30734 CVE-2021-30744
                   CVE-2021-30749 CVE-2021-30758 CVE-2021-30795
                   CVE-2021-30797 CVE-2021-30799
====================================================================
1. Summary:

An update for GNOME is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, s390x, x86_64

3. Description:

GNOME is the default desktop environment of Red Hat Enterprise Linux.

The following packages have been upgraded to a later upstream version: gdm (40.0), webkit2gtk3 (2.32.3).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

GDM must be restarted for this update to take effect. The GNOME session must be restarted (log out, then log back in) for this update to take effect.
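The update-and-restart procedure described above can be sketched as a small shell script. This is a minimal sketch, assuming a RHEL 8 host; the package selection shown is a hypothetical subset of the advisory's full package list, and the sketch only prints the commands rather than executing them:

```shell
# Minimal sketch of applying RHSA-2021:4381 and restarting GDM on RHEL 8.
# gdm, webkit2gtk3, gnome-shell, and gnome-software are packages named in
# this advisory; on a real host the commands would be run as root.
update_cmd="dnf update gdm webkit2gtk3 gnome-shell gnome-software"
restart_cmd="systemctl restart gdm"  # ends the login screen and any graphical sessions

if command -v dnf >/dev/null 2>&1; then
    # This sketch stays side-effect free and only prints the commands.
    echo "would run: $update_cmd && $restart_cmd"
else
    echo "dnf not found: not a RHEL/Fedora-family host"
fi
```

After restarting GDM, logging out and back in restarts the GNOME session so that running components pick up the updated libraries, as the advisory notes.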

5. Bugs fixed (https://bugzilla.redhat.com/):

1651378 - [RFE] Provide a mechanism for persistently showing the security level of a machine at login time
1770302 - disable show text in GDM login/lock screen (patched in RHEL 7.8)
1791478 - Cannot completely disable odrs (Gnome Ratings) from the Software application in Gnome Desktop
1813727 - Files copied from NFS4 to Desktop can't be opened
1854679 - [RFE] Disable left edge gesture
1873297 - Gnome-software coredumps when run as root in terminal
1873488 - GTK3 prints errors with overlay scrollbar disabled
1888404 - Updates page hides ongoing updates on refresh
1894613 - [RFE] Re-inclusion of workspace renaming in GNOME 3.
1897932 - JS ERROR: Error: Extension point conflict: there is already a status indicator for role ...
1904139 - Automatic Logout Feature not working
1905000 - Desktop refresh broken after unlock
1909300 - gdm isn't killing the login screen on login after all, should rebase to latest release
1914925 - RFE: add patch to set grub boot_success flag on shutdown/reboot
1924725 - [Wayland] Double-touch desktop icons fails sometimes
1925640 - CVE-2020-36241 gnome-autoar: Directory traversal via directory symbolic links pointing outside of the destination directory
1928794 - CVE-2020-24870 LibRaw: Stack buffer overflow in LibRaw::identify_process_dng_fields() in identify.cpp
1928886 - CVE-2020-13558 webkitgtk: Use-after-free in AudioSourceProviderGStreamer leading to arbitrary code execution
1935261 - [RFE] Enable connecting to WiFI and VPN connections at the GDM login
1937416 - Rebase WebKitGTK to 2.32
1937866 - Unable to disable onscreen keyboard in touch screen machine [rhel-8.5.0]
1938937 - Mutter: mouse click doesn't work when using 10-bit graphic monitor [rhel-8.5.0]
1940026 - CVE-2021-28650 gnome-autoar: Directory traversal via directory symbolic links pointing outside of the destination directory (incomplete CVE-2020-36241 fix)
1944323 - CVE-2020-27918 webkitgtk: Use-after-free leading to arbitrary code execution
1944329 - CVE-2020-29623 webkitgtk: User may be unable to fully delete browsing history
1944333 - CVE-2021-1765 webkitgtk: IFrame sandboxing policy violation
1944337 - CVE-2021-1789 webkitgtk: Type confusion issue leading to arbitrary code execution
1944340 - CVE-2021-1799 webkitgtk: Access to restricted ports on arbitrary servers via port redirection
1944343 - CVE-2021-1801 webkitgtk: IFrame sandboxing policy violation
1944350 - CVE-2021-1870 webkitgtk: Logic issue leading to arbitrary code execution
1944859 - CVE-2021-1788 webkitgtk: Use-after-free leading to arbitrary code execution
1944862 - CVE-2021-1844 webkitgtk: Memory corruption issue leading to arbitrary code execution
1944867 - CVE-2021-1871 webkitgtk: Logic issue leading to arbitrary code execution
1949176 - GNOME Shell on Wayland does not generate xauth data, needed for X forwarding over SSH
1951086 - Disable the Facebook provider
1952136 - Disable the Foursquare provider
1955754 - gnome-session kiosk-session support still isn't up to muster
1957705 - RFE: make gnome-calculator internet access attemps configurable system-wide
1960705 - Vino nonfunctional in FIPS mode
1962049 - [Hyper-V][RHEL8.5]gdm: Guest with 1 vcpu start GUI failed on Hyper-V
1971507 - gnome-shell JS ERROR Error calling onComplete: TypeError this._dialog.actor is undefined _hideLockScreenComplete updateTweens
1971534 - gnome-shell[2343]: gsignal.c:2642: instance '0x5583c61f9280' has no handler with id '23831'
1972545 - flatpak: Prefer runtime from the same origin as the application
1978287 - gnome-shell to include / Documented - PolicyKit-authentication-agent
1978505 - Gnome Software development package is missing important header files.
1978612 - pt_BR translations for "Register System" panel
1980441 - CVE-2021-21806 webkitgtk: Use-after-free in fireEventListeners leading to arbitrary code execution
1980661 - "Screen Lock disabled" notification appears on first login after disabling gdm and notification pop-up.

6. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: LibRaw-0.19.5-3.el8.src.rpm accountsservice-0.6.55-2.el8.src.rpm gdm-40.0-15.el8.src.rpm gnome-autoar-0.2.3-2.el8.src.rpm gnome-calculator-3.28.2-2.el8.src.rpm gnome-control-center-3.28.2-28.el8.src.rpm gnome-online-accounts-3.28.2-3.el8.src.rpm gnome-session-3.28.1-13.el8.src.rpm gnome-settings-daemon-3.32.0-16.el8.src.rpm gnome-shell-3.32.2-40.el8.src.rpm gnome-shell-extensions-3.32.1-20.el8.src.rpm gnome-software-3.36.1-10.el8.src.rpm gtk3-3.22.30-8.el8.src.rpm mutter-3.32.2-60.el8.src.rpm vino-3.22.0-11.el8.src.rpm webkit2gtk3-2.32.3-2.el8.src.rpm

aarch64: accountsservice-0.6.55-2.el8.aarch64.rpm accountsservice-debuginfo-0.6.55-2.el8.aarch64.rpm accountsservice-debugsource-0.6.55-2.el8.aarch64.rpm accountsservice-libs-0.6.55-2.el8.aarch64.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.aarch64.rpm gdm-40.0-15.el8.aarch64.rpm gdm-debuginfo-40.0-15.el8.aarch64.rpm gdm-debugsource-40.0-15.el8.aarch64.rpm gnome-autoar-0.2.3-2.el8.aarch64.rpm gnome-autoar-debuginfo-0.2.3-2.el8.aarch64.rpm gnome-autoar-debugsource-0.2.3-2.el8.aarch64.rpm gnome-calculator-3.28.2-2.el8.aarch64.rpm gnome-calculator-debuginfo-3.28.2-2.el8.aarch64.rpm gnome-calculator-debugsource-3.28.2-2.el8.aarch64.rpm gnome-control-center-3.28.2-28.el8.aarch64.rpm gnome-control-center-debuginfo-3.28.2-28.el8.aarch64.rpm gnome-control-center-debugsource-3.28.2-28.el8.aarch64.rpm gnome-online-accounts-3.28.2-3.el8.aarch64.rpm gnome-online-accounts-debuginfo-3.28.2-3.el8.aarch64.rpm gnome-online-accounts-debugsource-3.28.2-3.el8.aarch64.rpm gnome-online-accounts-devel-3.28.2-3.el8.aarch64.rpm gnome-session-3.28.1-13.el8.aarch64.rpm gnome-session-debuginfo-3.28.1-13.el8.aarch64.rpm gnome-session-debugsource-3.28.1-13.el8.aarch64.rpm gnome-session-kiosk-session-3.28.1-13.el8.aarch64.rpm gnome-session-wayland-session-3.28.1-13.el8.aarch64.rpm gnome-session-xsession-3.28.1-13.el8.aarch64.rpm gnome-settings-daemon-3.32.0-16.el8.aarch64.rpm gnome-settings-daemon-debuginfo-3.32.0-16.el8.aarch64.rpm gnome-settings-daemon-debugsource-3.32.0-16.el8.aarch64.rpm gnome-shell-3.32.2-40.el8.aarch64.rpm gnome-shell-debuginfo-3.32.2-40.el8.aarch64.rpm gnome-shell-debugsource-3.32.2-40.el8.aarch64.rpm gnome-software-3.36.1-10.el8.aarch64.rpm gnome-software-debuginfo-3.36.1-10.el8.aarch64.rpm gnome-software-debugsource-3.36.1-10.el8.aarch64.rpm gsettings-desktop-schemas-devel-3.32.0-6.el8.aarch64.rpm gtk-update-icon-cache-3.22.30-8.el8.aarch64.rpm gtk-update-icon-cache-debuginfo-3.22.30-8.el8.aarch64.rpm gtk3-3.22.30-8.el8.aarch64.rpm 
gtk3-debuginfo-3.22.30-8.el8.aarch64.rpm gtk3-debugsource-3.22.30-8.el8.aarch64.rpm gtk3-devel-3.22.30-8.el8.aarch64.rpm gtk3-devel-debuginfo-3.22.30-8.el8.aarch64.rpm gtk3-immodule-xim-3.22.30-8.el8.aarch64.rpm gtk3-immodule-xim-debuginfo-3.22.30-8.el8.aarch64.rpm gtk3-immodules-debuginfo-3.22.30-8.el8.aarch64.rpm gtk3-tests-debuginfo-3.22.30-8.el8.aarch64.rpm mutter-3.32.2-60.el8.aarch64.rpm mutter-debuginfo-3.32.2-60.el8.aarch64.rpm mutter-debugsource-3.32.2-60.el8.aarch64.rpm mutter-tests-debuginfo-3.32.2-60.el8.aarch64.rpm vino-3.22.0-11.el8.aarch64.rpm vino-debuginfo-3.22.0-11.el8.aarch64.rpm vino-debugsource-3.22.0-11.el8.aarch64.rpm webkit2gtk3-2.32.3-2.el8.aarch64.rpm webkit2gtk3-debuginfo-2.32.3-2.el8.aarch64.rpm webkit2gtk3-debugsource-2.32.3-2.el8.aarch64.rpm webkit2gtk3-devel-2.32.3-2.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.32.3-2.el8.aarch64.rpm webkit2gtk3-jsc-2.32.3-2.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.32.3-2.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.32.3-2.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.aarch64.rpm

noarch: gnome-classic-session-3.32.1-20.el8.noarch.rpm gnome-control-center-filesystem-3.28.2-28.el8.noarch.rpm gnome-shell-extension-apps-menu-3.32.1-20.el8.noarch.rpm gnome-shell-extension-auto-move-windows-3.32.1-20.el8.noarch.rpm gnome-shell-extension-common-3.32.1-20.el8.noarch.rpm gnome-shell-extension-dash-to-dock-3.32.1-20.el8.noarch.rpm gnome-shell-extension-desktop-icons-3.32.1-20.el8.noarch.rpm gnome-shell-extension-disable-screenshield-3.32.1-20.el8.noarch.rpm gnome-shell-extension-drive-menu-3.32.1-20.el8.noarch.rpm gnome-shell-extension-gesture-inhibitor-3.32.1-20.el8.noarch.rpm gnome-shell-extension-horizontal-workspaces-3.32.1-20.el8.noarch.rpm gnome-shell-extension-launch-new-instance-3.32.1-20.el8.noarch.rpm gnome-shell-extension-native-window-placement-3.32.1-20.el8.noarch.rpm gnome-shell-extension-no-hot-corner-3.32.1-20.el8.noarch.rpm gnome-shell-extension-panel-favorites-3.32.1-20.el8.noarch.rpm gnome-shell-extension-places-menu-3.32.1-20.el8.noarch.rpm gnome-shell-extension-screenshot-window-sizer-3.32.1-20.el8.noarch.rpm gnome-shell-extension-systemMonitor-3.32.1-20.el8.noarch.rpm gnome-shell-extension-top-icons-3.32.1-20.el8.noarch.rpm gnome-shell-extension-updates-dialog-3.32.1-20.el8.noarch.rpm gnome-shell-extension-user-theme-3.32.1-20.el8.noarch.rpm gnome-shell-extension-window-grouper-3.32.1-20.el8.noarch.rpm gnome-shell-extension-window-list-3.32.1-20.el8.noarch.rpm gnome-shell-extension-windowsNavigator-3.32.1-20.el8.noarch.rpm gnome-shell-extension-workspace-indicator-3.32.1-20.el8.noarch.rpm

ppc64le: LibRaw-0.19.5-3.el8.ppc64le.rpm LibRaw-debuginfo-0.19.5-3.el8.ppc64le.rpm LibRaw-debugsource-0.19.5-3.el8.ppc64le.rpm LibRaw-samples-debuginfo-0.19.5-3.el8.ppc64le.rpm accountsservice-0.6.55-2.el8.ppc64le.rpm accountsservice-debuginfo-0.6.55-2.el8.ppc64le.rpm accountsservice-debugsource-0.6.55-2.el8.ppc64le.rpm accountsservice-libs-0.6.55-2.el8.ppc64le.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.ppc64le.rpm gdm-40.0-15.el8.ppc64le.rpm gdm-debuginfo-40.0-15.el8.ppc64le.rpm gdm-debugsource-40.0-15.el8.ppc64le.rpm gnome-autoar-0.2.3-2.el8.ppc64le.rpm gnome-autoar-debuginfo-0.2.3-2.el8.ppc64le.rpm gnome-autoar-debugsource-0.2.3-2.el8.ppc64le.rpm gnome-calculator-3.28.2-2.el8.ppc64le.rpm gnome-calculator-debuginfo-3.28.2-2.el8.ppc64le.rpm gnome-calculator-debugsource-3.28.2-2.el8.ppc64le.rpm gnome-control-center-3.28.2-28.el8.ppc64le.rpm gnome-control-center-debuginfo-3.28.2-28.el8.ppc64le.rpm gnome-control-center-debugsource-3.28.2-28.el8.ppc64le.rpm gnome-online-accounts-3.28.2-3.el8.ppc64le.rpm gnome-online-accounts-debuginfo-3.28.2-3.el8.ppc64le.rpm gnome-online-accounts-debugsource-3.28.2-3.el8.ppc64le.rpm gnome-online-accounts-devel-3.28.2-3.el8.ppc64le.rpm gnome-session-3.28.1-13.el8.ppc64le.rpm gnome-session-debuginfo-3.28.1-13.el8.ppc64le.rpm gnome-session-debugsource-3.28.1-13.el8.ppc64le.rpm gnome-session-kiosk-session-3.28.1-13.el8.ppc64le.rpm gnome-session-wayland-session-3.28.1-13.el8.ppc64le.rpm gnome-session-xsession-3.28.1-13.el8.ppc64le.rpm gnome-settings-daemon-3.32.0-16.el8.ppc64le.rpm gnome-settings-daemon-debuginfo-3.32.0-16.el8.ppc64le.rpm gnome-settings-daemon-debugsource-3.32.0-16.el8.ppc64le.rpm gnome-shell-3.32.2-40.el8.ppc64le.rpm gnome-shell-debuginfo-3.32.2-40.el8.ppc64le.rpm gnome-shell-debugsource-3.32.2-40.el8.ppc64le.rpm gnome-software-3.36.1-10.el8.ppc64le.rpm gnome-software-debuginfo-3.36.1-10.el8.ppc64le.rpm gnome-software-debugsource-3.36.1-10.el8.ppc64le.rpm gsettings-desktop-schemas-devel-3.32.0-6.el8.ppc64le.rpm 
gtk-update-icon-cache-3.22.30-8.el8.ppc64le.rpm gtk-update-icon-cache-debuginfo-3.22.30-8.el8.ppc64le.rpm gtk3-3.22.30-8.el8.ppc64le.rpm gtk3-debuginfo-3.22.30-8.el8.ppc64le.rpm gtk3-debugsource-3.22.30-8.el8.ppc64le.rpm gtk3-devel-3.22.30-8.el8.ppc64le.rpm gtk3-devel-debuginfo-3.22.30-8.el8.ppc64le.rpm gtk3-immodule-xim-3.22.30-8.el8.ppc64le.rpm gtk3-immodule-xim-debuginfo-3.22.30-8.el8.ppc64le.rpm gtk3-immodules-debuginfo-3.22.30-8.el8.ppc64le.rpm gtk3-tests-debuginfo-3.22.30-8.el8.ppc64le.rpm mutter-3.32.2-60.el8.ppc64le.rpm mutter-debuginfo-3.32.2-60.el8.ppc64le.rpm mutter-debugsource-3.32.2-60.el8.ppc64le.rpm mutter-tests-debuginfo-3.32.2-60.el8.ppc64le.rpm vino-3.22.0-11.el8.ppc64le.rpm vino-debuginfo-3.22.0-11.el8.ppc64le.rpm vino-debugsource-3.22.0-11.el8.ppc64le.rpm webkit2gtk3-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-debuginfo-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-debugsource-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-devel-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-jsc-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.32.3-2.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.ppc64le.rpm

s390x: accountsservice-0.6.55-2.el8.s390x.rpm accountsservice-debuginfo-0.6.55-2.el8.s390x.rpm accountsservice-debugsource-0.6.55-2.el8.s390x.rpm accountsservice-libs-0.6.55-2.el8.s390x.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.s390x.rpm gdm-40.0-15.el8.s390x.rpm gdm-debuginfo-40.0-15.el8.s390x.rpm gdm-debugsource-40.0-15.el8.s390x.rpm gnome-autoar-0.2.3-2.el8.s390x.rpm gnome-autoar-debuginfo-0.2.3-2.el8.s390x.rpm gnome-autoar-debugsource-0.2.3-2.el8.s390x.rpm gnome-calculator-3.28.2-2.el8.s390x.rpm gnome-calculator-debuginfo-3.28.2-2.el8.s390x.rpm gnome-calculator-debugsource-3.28.2-2.el8.s390x.rpm gnome-control-center-3.28.2-28.el8.s390x.rpm gnome-control-center-debuginfo-3.28.2-28.el8.s390x.rpm gnome-control-center-debugsource-3.28.2-28.el8.s390x.rpm gnome-online-accounts-3.28.2-3.el8.s390x.rpm gnome-online-accounts-debuginfo-3.28.2-3.el8.s390x.rpm gnome-online-accounts-debugsource-3.28.2-3.el8.s390x.rpm gnome-online-accounts-devel-3.28.2-3.el8.s390x.rpm gnome-session-3.28.1-13.el8.s390x.rpm gnome-session-debuginfo-3.28.1-13.el8.s390x.rpm gnome-session-debugsource-3.28.1-13.el8.s390x.rpm gnome-session-kiosk-session-3.28.1-13.el8.s390x.rpm gnome-session-wayland-session-3.28.1-13.el8.s390x.rpm gnome-session-xsession-3.28.1-13.el8.s390x.rpm gnome-settings-daemon-3.32.0-16.el8.s390x.rpm gnome-settings-daemon-debuginfo-3.32.0-16.el8.s390x.rpm gnome-settings-daemon-debugsource-3.32.0-16.el8.s390x.rpm gnome-shell-3.32.2-40.el8.s390x.rpm gnome-shell-debuginfo-3.32.2-40.el8.s390x.rpm gnome-shell-debugsource-3.32.2-40.el8.s390x.rpm gnome-software-3.36.1-10.el8.s390x.rpm gnome-software-debuginfo-3.36.1-10.el8.s390x.rpm gnome-software-debugsource-3.36.1-10.el8.s390x.rpm gsettings-desktop-schemas-devel-3.32.0-6.el8.s390x.rpm gtk-update-icon-cache-3.22.30-8.el8.s390x.rpm gtk-update-icon-cache-debuginfo-3.22.30-8.el8.s390x.rpm gtk3-3.22.30-8.el8.s390x.rpm gtk3-debuginfo-3.22.30-8.el8.s390x.rpm gtk3-debugsource-3.22.30-8.el8.s390x.rpm gtk3-devel-3.22.30-8.el8.s390x.rpm 
gtk3-devel-debuginfo-3.22.30-8.el8.s390x.rpm gtk3-immodule-xim-3.22.30-8.el8.s390x.rpm gtk3-immodule-xim-debuginfo-3.22.30-8.el8.s390x.rpm gtk3-immodules-debuginfo-3.22.30-8.el8.s390x.rpm gtk3-tests-debuginfo-3.22.30-8.el8.s390x.rpm mutter-3.32.2-60.el8.s390x.rpm mutter-debuginfo-3.32.2-60.el8.s390x.rpm mutter-debugsource-3.32.2-60.el8.s390x.rpm mutter-tests-debuginfo-3.32.2-60.el8.s390x.rpm vino-3.22.0-11.el8.s390x.rpm vino-debuginfo-3.22.0-11.el8.s390x.rpm vino-debugsource-3.22.0-11.el8.s390x.rpm webkit2gtk3-2.32.3-2.el8.s390x.rpm webkit2gtk3-debuginfo-2.32.3-2.el8.s390x.rpm webkit2gtk3-debugsource-2.32.3-2.el8.s390x.rpm webkit2gtk3-devel-2.32.3-2.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.32.3-2.el8.s390x.rpm webkit2gtk3-jsc-2.32.3-2.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.32.3-2.el8.s390x.rpm webkit2gtk3-jsc-devel-2.32.3-2.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.s390x.rpm

x86_64: LibRaw-0.19.5-3.el8.i686.rpm LibRaw-0.19.5-3.el8.x86_64.rpm LibRaw-debuginfo-0.19.5-3.el8.i686.rpm LibRaw-debuginfo-0.19.5-3.el8.x86_64.rpm LibRaw-debugsource-0.19.5-3.el8.i686.rpm LibRaw-debugsource-0.19.5-3.el8.x86_64.rpm LibRaw-samples-debuginfo-0.19.5-3.el8.i686.rpm LibRaw-samples-debuginfo-0.19.5-3.el8.x86_64.rpm accountsservice-0.6.55-2.el8.x86_64.rpm accountsservice-debuginfo-0.6.55-2.el8.i686.rpm accountsservice-debuginfo-0.6.55-2.el8.x86_64.rpm accountsservice-debugsource-0.6.55-2.el8.i686.rpm accountsservice-debugsource-0.6.55-2.el8.x86_64.rpm accountsservice-libs-0.6.55-2.el8.i686.rpm accountsservice-libs-0.6.55-2.el8.x86_64.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.i686.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.x86_64.rpm gdm-40.0-15.el8.i686.rpm gdm-40.0-15.el8.x86_64.rpm gdm-debuginfo-40.0-15.el8.i686.rpm gdm-debuginfo-40.0-15.el8.x86_64.rpm gdm-debugsource-40.0-15.el8.i686.rpm gdm-debugsource-40.0-15.el8.x86_64.rpm gnome-autoar-0.2.3-2.el8.i686.rpm gnome-autoar-0.2.3-2.el8.x86_64.rpm gnome-autoar-debuginfo-0.2.3-2.el8.i686.rpm gnome-autoar-debuginfo-0.2.3-2.el8.x86_64.rpm gnome-autoar-debugsource-0.2.3-2.el8.i686.rpm gnome-autoar-debugsource-0.2.3-2.el8.x86_64.rpm gnome-calculator-3.28.2-2.el8.x86_64.rpm gnome-calculator-debuginfo-3.28.2-2.el8.x86_64.rpm gnome-calculator-debugsource-3.28.2-2.el8.x86_64.rpm gnome-control-center-3.28.2-28.el8.x86_64.rpm gnome-control-center-debuginfo-3.28.2-28.el8.x86_64.rpm gnome-control-center-debugsource-3.28.2-28.el8.x86_64.rpm gnome-online-accounts-3.28.2-3.el8.i686.rpm gnome-online-accounts-3.28.2-3.el8.x86_64.rpm gnome-online-accounts-debuginfo-3.28.2-3.el8.i686.rpm gnome-online-accounts-debuginfo-3.28.2-3.el8.x86_64.rpm gnome-online-accounts-debugsource-3.28.2-3.el8.i686.rpm gnome-online-accounts-debugsource-3.28.2-3.el8.x86_64.rpm gnome-online-accounts-devel-3.28.2-3.el8.i686.rpm gnome-online-accounts-devel-3.28.2-3.el8.x86_64.rpm gnome-session-3.28.1-13.el8.x86_64.rpm 
gnome-session-debuginfo-3.28.1-13.el8.x86_64.rpm gnome-session-debugsource-3.28.1-13.el8.x86_64.rpm gnome-session-kiosk-session-3.28.1-13.el8.x86_64.rpm gnome-session-wayland-session-3.28.1-13.el8.x86_64.rpm gnome-session-xsession-3.28.1-13.el8.x86_64.rpm gnome-settings-daemon-3.32.0-16.el8.x86_64.rpm gnome-settings-daemon-debuginfo-3.32.0-16.el8.x86_64.rpm gnome-settings-daemon-debugsource-3.32.0-16.el8.x86_64.rpm gnome-shell-3.32.2-40.el8.x86_64.rpm gnome-shell-debuginfo-3.32.2-40.el8.x86_64.rpm gnome-shell-debugsource-3.32.2-40.el8.x86_64.rpm gnome-software-3.36.1-10.el8.x86_64.rpm gnome-software-debuginfo-3.36.1-10.el8.x86_64.rpm gnome-software-debugsource-3.36.1-10.el8.x86_64.rpm gsettings-desktop-schemas-3.32.0-6.el8.i686.rpm gsettings-desktop-schemas-devel-3.32.0-6.el8.i686.rpm gsettings-desktop-schemas-devel-3.32.0-6.el8.x86_64.rpm gtk-update-icon-cache-3.22.30-8.el8.x86_64.rpm gtk-update-icon-cache-debuginfo-3.22.30-8.el8.i686.rpm gtk-update-icon-cache-debuginfo-3.22.30-8.el8.x86_64.rpm gtk3-3.22.30-8.el8.i686.rpm gtk3-3.22.30-8.el8.x86_64.rpm gtk3-debuginfo-3.22.30-8.el8.i686.rpm gtk3-debuginfo-3.22.30-8.el8.x86_64.rpm gtk3-debugsource-3.22.30-8.el8.i686.rpm gtk3-debugsource-3.22.30-8.el8.x86_64.rpm gtk3-devel-3.22.30-8.el8.i686.rpm gtk3-devel-3.22.30-8.el8.x86_64.rpm gtk3-devel-debuginfo-3.22.30-8.el8.i686.rpm gtk3-devel-debuginfo-3.22.30-8.el8.x86_64.rpm gtk3-immodule-xim-3.22.30-8.el8.x86_64.rpm gtk3-immodule-xim-debuginfo-3.22.30-8.el8.i686.rpm gtk3-immodule-xim-debuginfo-3.22.30-8.el8.x86_64.rpm gtk3-immodules-debuginfo-3.22.30-8.el8.i686.rpm gtk3-immodules-debuginfo-3.22.30-8.el8.x86_64.rpm gtk3-tests-debuginfo-3.22.30-8.el8.i686.rpm gtk3-tests-debuginfo-3.22.30-8.el8.x86_64.rpm mutter-3.32.2-60.el8.i686.rpm mutter-3.32.2-60.el8.x86_64.rpm mutter-debuginfo-3.32.2-60.el8.i686.rpm mutter-debuginfo-3.32.2-60.el8.x86_64.rpm mutter-debugsource-3.32.2-60.el8.i686.rpm mutter-debugsource-3.32.2-60.el8.x86_64.rpm mutter-tests-debuginfo-3.32.2-60.el8.i686.rpm 
mutter-tests-debuginfo-3.32.2-60.el8.x86_64.rpm vino-3.22.0-11.el8.x86_64.rpm vino-debuginfo-3.22.0-11.el8.x86_64.rpm vino-debugsource-3.22.0-11.el8.x86_64.rpm webkit2gtk3-2.32.3-2.el8.i686.rpm webkit2gtk3-2.32.3-2.el8.x86_64.rpm webkit2gtk3-debuginfo-2.32.3-2.el8.i686.rpm webkit2gtk3-debuginfo-2.32.3-2.el8.x86_64.rpm webkit2gtk3-debugsource-2.32.3-2.el8.i686.rpm webkit2gtk3-debugsource-2.32.3-2.el8.x86_64.rpm webkit2gtk3-devel-2.32.3-2.el8.i686.rpm webkit2gtk3-devel-2.32.3-2.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.32.3-2.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.32.3-2.el8.x86_64.rpm webkit2gtk3-jsc-2.32.3-2.el8.i686.rpm webkit2gtk3-jsc-2.32.3-2.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.32.3-2.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.32.3-2.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.32.3-2.el8.i686.rpm webkit2gtk3-jsc-devel-2.32.3-2.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.i686.rpm webkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.x86_64.rpm

Red Hat Enterprise Linux BaseOS (v. 8):

Source: gsettings-desktop-schemas-3.32.0-6.el8.src.rpm

aarch64: gsettings-desktop-schemas-3.32.0-6.el8.aarch64.rpm

ppc64le: gsettings-desktop-schemas-3.32.0-6.el8.ppc64le.rpm

s390x: gsettings-desktop-schemas-3.32.0-6.el8.s390x.rpm

x86_64: gsettings-desktop-schemas-3.32.0-6.el8.x86_64.rpm

Red Hat Enterprise Linux CRB (v. 8):

aarch64: accountsservice-debuginfo-0.6.55-2.el8.aarch64.rpm accountsservice-debugsource-0.6.55-2.el8.aarch64.rpm accountsservice-devel-0.6.55-2.el8.aarch64.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.aarch64.rpm gnome-software-debuginfo-3.36.1-10.el8.aarch64.rpm gnome-software-debugsource-3.36.1-10.el8.aarch64.rpm gnome-software-devel-3.36.1-10.el8.aarch64.rpm mutter-debuginfo-3.32.2-60.el8.aarch64.rpm mutter-debugsource-3.32.2-60.el8.aarch64.rpm mutter-devel-3.32.2-60.el8.aarch64.rpm mutter-tests-debuginfo-3.32.2-60.el8.aarch64.rpm

ppc64le: LibRaw-debuginfo-0.19.5-3.el8.ppc64le.rpm LibRaw-debugsource-0.19.5-3.el8.ppc64le.rpm LibRaw-devel-0.19.5-3.el8.ppc64le.rpm LibRaw-samples-debuginfo-0.19.5-3.el8.ppc64le.rpm accountsservice-debuginfo-0.6.55-2.el8.ppc64le.rpm accountsservice-debugsource-0.6.55-2.el8.ppc64le.rpm accountsservice-devel-0.6.55-2.el8.ppc64le.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.ppc64le.rpm gnome-software-debuginfo-3.36.1-10.el8.ppc64le.rpm gnome-software-debugsource-3.36.1-10.el8.ppc64le.rpm gnome-software-devel-3.36.1-10.el8.ppc64le.rpm mutter-debuginfo-3.32.2-60.el8.ppc64le.rpm mutter-debugsource-3.32.2-60.el8.ppc64le.rpm mutter-devel-3.32.2-60.el8.ppc64le.rpm mutter-tests-debuginfo-3.32.2-60.el8.ppc64le.rpm

s390x: accountsservice-debuginfo-0.6.55-2.el8.s390x.rpm accountsservice-debugsource-0.6.55-2.el8.s390x.rpm accountsservice-devel-0.6.55-2.el8.s390x.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.s390x.rpm gnome-software-debuginfo-3.36.1-10.el8.s390x.rpm gnome-software-debugsource-3.36.1-10.el8.s390x.rpm gnome-software-devel-3.36.1-10.el8.s390x.rpm mutter-debuginfo-3.32.2-60.el8.s390x.rpm mutter-debugsource-3.32.2-60.el8.s390x.rpm mutter-devel-3.32.2-60.el8.s390x.rpm mutter-tests-debuginfo-3.32.2-60.el8.s390x.rpm

x86_64: LibRaw-debuginfo-0.19.5-3.el8.i686.rpm LibRaw-debuginfo-0.19.5-3.el8.x86_64.rpm LibRaw-debugsource-0.19.5-3.el8.i686.rpm LibRaw-debugsource-0.19.5-3.el8.x86_64.rpm LibRaw-devel-0.19.5-3.el8.i686.rpm LibRaw-devel-0.19.5-3.el8.x86_64.rpm LibRaw-samples-debuginfo-0.19.5-3.el8.i686.rpm LibRaw-samples-debuginfo-0.19.5-3.el8.x86_64.rpm accountsservice-debuginfo-0.6.55-2.el8.i686.rpm accountsservice-debuginfo-0.6.55-2.el8.x86_64.rpm accountsservice-debugsource-0.6.55-2.el8.i686.rpm accountsservice-debugsource-0.6.55-2.el8.x86_64.rpm accountsservice-devel-0.6.55-2.el8.i686.rpm accountsservice-devel-0.6.55-2.el8.x86_64.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.i686.rpm accountsservice-libs-debuginfo-0.6.55-2.el8.x86_64.rpm gnome-software-3.36.1-10.el8.i686.rpm gnome-software-debuginfo-3.36.1-10.el8.i686.rpm gnome-software-debuginfo-3.36.1-10.el8.x86_64.rpm gnome-software-debugsource-3.36.1-10.el8.i686.rpm gnome-software-debugsource-3.36.1-10.el8.x86_64.rpm gnome-software-devel-3.36.1-10.el8.i686.rpm gnome-software-devel-3.36.1-10.el8.x86_64.rpm mutter-debuginfo-3.32.2-60.el8.i686.rpm mutter-debuginfo-3.32.2-60.el8.x86_64.rpm mutter-debugsource-3.32.2-60.el8.i686.rpm mutter-debugsource-3.32.2-60.el8.x86_64.rpm mutter-devel-3.32.2-60.el8.i686.rpm mutter-devel-3.32.2-60.el8.x86_64.rpm mutter-tests-debuginfo-3.32.2-60.el8.i686.rpm mutter-tests-debuginfo-3.32.2-60.el8.x86_64.rpm
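The package file names listed above follow RPM's `name-version-release.arch.rpm` convention. As an illustrative sketch (not part of the advisory, and assuming no epoch is embedded in the file name, as in the lists above), they can be split with two right-splits:

```python
def split_rpm_filename(filename: str) -> dict:
    """Split an RPM file name of the form name-version-release.arch.rpm.

    Illustrative only: assumes the common N-V-R.A layout used in the
    package lists above (no epoch in the file name).
    """
    base = filename[:-len(".rpm")] if filename.endswith(".rpm") else filename
    rest, arch = base.rsplit(".", 1)               # peel off the architecture
    name, version, release = rest.rsplit("-", 2)   # N-V-R, right to left
    return {"name": name, "version": version, "release": release, "arch": arch}

print(split_rpm_filename("webkit2gtk3-2.32.3-2.el8.x86_64.rpm"))
# {'name': 'webkit2gtk3', 'version': '2.32.3', 'release': '2.el8', 'arch': 'x86_64'}
```

Splitting from the right is what makes this robust for names that themselves contain hyphens, such as `gsettings-desktop-schemas`.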

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

References:

https://access.redhat.com/security/cve/CVE-2020-13558 https://access.redhat.com/security/cve/CVE-2020-24870 https://access.redhat.com/security/cve/CVE-2020-27918 https://access.redhat.com/security/cve/CVE-2020-29623 https://access.redhat.com/security/cve/CVE-2020-36241 https://access.redhat.com/security/cve/CVE-2021-1765 https://access.redhat.com/security/cve/CVE-2021-1788 https://access.redhat.com/security/cve/CVE-2021-1789 https://access.redhat.com/security/cve/CVE-2021-1799 https://access.redhat.com/security/cve/CVE-2021-1801 https://access.redhat.com/security/cve/CVE-2021-1844 https://access.redhat.com/security/cve/CVE-2021-1870 https://access.redhat.com/security/cve/CVE-2021-1871 https://access.redhat.com/security/cve/CVE-2021-21775 https://access.redhat.com/security/cve/CVE-2021-21779 https://access.redhat.com/security/cve/CVE-2021-21806 https://access.redhat.com/security/cve/CVE-2021-28650 https://access.redhat.com/security/cve/CVE-2021-30663 https://access.redhat.com/security/cve/CVE-2021-30665 https://access.redhat.com/security/cve/CVE-2021-30682 https://access.redhat.com/security/cve/CVE-2021-30689 https://access.redhat.com/security/cve/CVE-2021-30720 https://access.redhat.com/security/cve/CVE-2021-30734 https://access.redhat.com/security/cve/CVE-2021-30744 https://access.redhat.com/security/cve/CVE-2021-30749 https://access.redhat.com/security/cve/CVE-2021-30758 https://access.redhat.com/security/cve/CVE-2021-30795 https://access.redhat.com/security/cve/CVE-2021-30797 https://access.redhat.com/security/cve/CVE-2021-30799 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/

Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYYrdm9zjgjWX9erEAQhgIA/+KzLn8QVHI3X8x9ufH1+nO8QXQqwTGQ0E awNXP8h4qsL7EGugHrz/KVjwaKJs/erPxh5jGl/xE1ZhngGlyStUpQkI2Y3cP2/3 05jDPPS0QEfG5Y0rlnESyPxtwQTCpqped5P7L8VtKuzRae1HV63onsBB8zpcIFF7 sTKcP6wAAjJDltUjlhnEkkE3G6Dxfv14/UowRAWoT9pa9cP0+KqdhuYKHdt3fCD7 tEItM/SFQGoCF8zvXbvAiUXfZsQ/t/Yik9O6WISTWenaxCcP43Xn7aicsvZMVOvQ w+jnH/hnMLBoPhH2k4PClsDapa/D6IrQIUrwxtgfbC4KRs0fbdrEGCPqs4nl/AdD Migcf4gCMBq0bk3/yKp+/bi+OWwRMmw3ZdkJsOTNrOAkK1UCyrpF1ULyfs+8/OC5 QnXW88fPCwhFj+KSAq5Cqfwm3hrKTCWIT/T1DQBG+J7Y9NgEx+zEXVmWaaA0z+7T qji5aUsIH+TG3t1EwtXABWGGEBRxC+svUoWNJBW1u6qwxfMx5E+hHUHhRewVYLYu SToRXa3cIX23M/XyHNXBgMCpPPw8DeY5aAA1fvKQsuMCLywDg0N3mYhvk1HUNidb Z6HmsLjLrGbkb1AAhP0V0wUuh5P6YJlL6iM49fQgztlHoBO0OAo56GBjAyT3pAAX 2rgR2Ny0wo4=gfrM -----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Summary:

The Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Security Fix(es):

  • mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):

2019088 - "MigrationController" CR displays syntax error when unquiescing applications 2021666 - Route name longer than 63 characters causes direct volume migration to fail 2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) 2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image 2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console 2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout 2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error 2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource 2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"
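The CVE pages referenced throughout this entry carry CVSS scores (NVD lists CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H, base score 8.8, for the WebKit issue this entry tracks). As a hedged illustration, not part of any advisory, the v3.1 base score can be recomputed from the vector string using the equations in the FIRST CVSS v3.1 specification; the sketch below covers only the unchanged-scope ("S:U") case:

```python
import math

# Metric weights from the CVSS v3.1 specification.
# PR values shown are the scope-unchanged ones; this sketch handles S:U only.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "C": {"H": 0.56, "L": 0.22, "N": 0.0},
    "I": {"H": 0.56, "L": 0.22, "N": 0.0},
    "A": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': smallest 1-decimal value >= x.

    Simplified; the spec's reference implementation adds extra guards
    against floating-point edge cases.
    """
    return math.ceil(x * 10) / 10

def base_score(vector: str) -> float:
    # "CVSS:3.1/AV:N/..." -> {"AV": "N", ...}, dropping the version prefix.
    m = dict(part.split(":") for part in vector.split("/")[1:])
    iss = 1 - (1 - WEIGHTS["C"][m["C"]]) * (1 - WEIGHTS["I"][m["I"]]) \
            * (1 - WEIGHTS["A"][m["A"]])
    impact = 6.42 * iss  # scope-unchanged formula only
    exploitability = 8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]] \
        * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))  # 8.8
```

Dropping the user-interaction requirement (UI:N) on the same vector yields the familiar 9.8, which matches how NVD scores similar no-interaction remote code execution issues.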

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2021-05-25-7 tvOS 14.6

tvOS 14.6 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212532.

Audio Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30707: hjy79425575 working with Trend Micro Zero Day Initiative

Audio Available for: Apple TV 4K and Apple TV HD Impact: Parsing a maliciously crafted audio file may lead to disclosure of user information Description: This issue was addressed with improved checks. CVE-2021-30685: Mickey Jin (@patch1t) of Trend Micro

CoreAudio Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted audio file may disclose restricted memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30686: Mickey Jin of Trend Micro

Crash Reporter Available for: Apple TV 4K and Apple TV HD Impact: A malicious application may be able to modify protected parts of the file system Description: A logic issue was addressed with improved state management. CVE-2021-30727: Cees Elzinga

CVMS Available for: Apple TV 4K and Apple TV HD Impact: A local attacker may be able to elevate their privileges Description: This issue was addressed with improved checks. CVE-2021-30724: Mickey Jin (@patch1t) of Trend Micro

Heimdal Available for: Apple TV 4K and Apple TV HD Impact: A local user may be able to leak sensitive user information Description: A logic issue was addressed with improved state management. CVE-2021-30697: Gabe Kirkpatrick (@gabe_k)

Heimdal Available for: Apple TV 4K and Apple TV HD Impact: A malicious application may cause a denial of service or potentially disclose memory contents Description: A memory corruption issue was addressed with improved state management. CVE-2021-30710: Gabe Kirkpatrick (@gabe_k)

ImageIO Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted image may lead to disclosure of user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30687: Hou JingYi (@hjy79425575) of Qihoo 360

ImageIO Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted image may lead to disclosure of user information Description: This issue was addressed with improved checks. CVE-2021-30700: Ye Zhang(@co0py_Cat) of Baidu Security

ImageIO Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30701: Mickey Jin (@patch1t) of Trend Micro and Ye Zhang of Baidu Security

ImageIO Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted ASTC file may disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30705: Ye Zhang of Baidu Security

Kernel Available for: Apple TV 4K and Apple TV HD Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved validation. CVE-2021-30740: Linus Henze (pinauten.de)

Kernel Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved state management. CVE-2021-30704: an anonymous researcher

Kernel Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted message may lead to a denial of service Description: A logic issue was addressed with improved state management. CVE-2021-30715: The UK's National Cyber Security Centre (NCSC)

Kernel Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with kernel privileges Description: A buffer overflow was addressed with improved size validation. CVE-2021-30736: Ian Beer of Google Project Zero

LaunchServices Available for: Apple TV 4K and Apple TV HD Impact: A malicious application may be able to break out of its sandbox Description: This issue was addressed with improved environment sanitization. CVE-2021-30677: Ron Waisberg (@epsilan)

Security Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted certificate may lead to arbitrary code execution Description: A memory corruption issue in the ASN.1 decoder was addressed by removing the vulnerable code. CVE-2021-30665: yangkang (@dnpushme)&zerokeeper&bianliang of 360 ATA

WebKit Available for: Apple TV 4K and Apple TV HD Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A cross-origin issue with iframe elements was addressed with improved tracking of security origins. CVE-2021-21779: Marcin Towalski of Cisco Talos

WebKit Available for: Apple TV 4K and Apple TV HD Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved restrictions. CVE-2021-30682: an anonymous researcher and 1lastBr3ath

WebKit Available for: Apple TV 4K and Apple TV HD Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A logic issue was addressed with improved state management. CVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab, ASU, working with Trend Micro Zero Day Initiative CVE-2021-30734: Jack Dates of RET2 Systems, Inc. (@ret2systems) working with Trend Micro Zero Day Initiative

WebKit Available for: Apple TV 4K and Apple TV HD Impact: A malicious website may be able to access restricted ports on arbitrary servers Description: A logic issue was addressed with improved restrictions. CVE-2021-30720

WebKit Available for: Apple TV 4K and Apple TV HD Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An integer overflow was addressed with improved input validation. CVE-2021-30663: an anonymous researcher

Additional recognition

ImageIO We would like to acknowledge Jzhu working with Trend Micro Zero Day Initiative and an anonymous researcher for their assistance.

WebKit We would like to acknowledge Chris Salls (@salls) of Makai Security for their assistance.

Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software."

To check the current version of software, select "Settings -> General -> About." Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmCtU9MACgkQZcsbuWJ6 jjBzuhAAmXJik2L+PmRMzs6dd1QcCSwHYi0KLG0ERapHKJsFcm5+xpv87a4AFO4p 3E6+5w9wQSWVEsQG1PIvuyV3M81xuu8xY88tAD1ce1qGA4Dny4E7RU08Y0l43j/x d1RemCf0TjwYpvX34/GaOspxFQYnRo1gWsU1v7bieF8vMHZmUOlgiNep0UEG3Kuq 7IAAsfzWS43a+nkefSDWEujMNwbg1SZKua/+BXgZC7AOXdAHItqyNBFIerUc2uSf ReHLZ5BNBKw9OsL9qoJsiLCmwxKrpUTzpQahu2gybZf65nza6QPOTohqqWq79EOD mIqOW4SQ5mVSrzMh+GB9EovMY+l5YgyHwObTUjRW+4znLU7fqNXBgwzgWoIpJdF0 rpkjP3phOGXZWwiBhRmm5iYI08HFoBfF+EoPFN5Ucl7ZWz2uF0bQlbp3yqRoGRaO ZWY2LzPIdP5zSq7rqXDaVnNFuKF93J4ouZZwVMXA4yf5wmQ3silIeJlvxxphlet8 oXv2pkewq9A81RGMlgMDZMvawQvPGkOVgeBm1coajN1swNY8esW7N6J1+rtDL0mI sulaGZCeSM9ndg5VRU2lpClFdGEUZXT2hZ8NoMV6jj48c0gZBW3M82snGD4zeRqM dcezqg6o22ZxpogRJuRf41Y87ktE5o73wgj0xu72MQoxK86+Ek0= =BeQR -----END PGP SIGNATURE-----

CVE-2021-30663: an anonymous researcher

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202202-01


                                       https://security.gentoo.org/

Severity: High
Title: WebkitGTK+: Multiple vulnerabilities
Date: February 01, 2022
Bugs: #779175, #801400, #813489, #819522, #820434, #829723, #831739
ID: 202202-01


Synopsis

Multiple vulnerabilities have been found in WebkitGTK+, the worst of which could result in the arbitrary execution of code.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk          < 2.34.4                   >= 2.34.4

Description

Multiple vulnerabilities have been discovered in WebkitGTK+. Please review the CVE identifiers referenced below for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.34.4"

References

[ 1 ] CVE-2021-30848 https://nvd.nist.gov/vuln/detail/CVE-2021-30848 [ 2 ] CVE-2021-30888 https://nvd.nist.gov/vuln/detail/CVE-2021-30888 [ 3 ] CVE-2021-30682 https://nvd.nist.gov/vuln/detail/CVE-2021-30682 [ 4 ] CVE-2021-30889 https://nvd.nist.gov/vuln/detail/CVE-2021-30889 [ 5 ] CVE-2021-30666 https://nvd.nist.gov/vuln/detail/CVE-2021-30666 [ 6 ] CVE-2021-30665 https://nvd.nist.gov/vuln/detail/CVE-2021-30665 [ 7 ] CVE-2021-30890 https://nvd.nist.gov/vuln/detail/CVE-2021-30890 [ 8 ] CVE-2021-30661 https://nvd.nist.gov/vuln/detail/CVE-2021-30661 [ 9 ] WSA-2021-0005 https://webkitgtk.org/security/WSA-2021-0005.html [ 10 ] CVE-2021-30761 https://nvd.nist.gov/vuln/detail/CVE-2021-30761 [ 11 ] CVE-2021-30897 https://nvd.nist.gov/vuln/detail/CVE-2021-30897 [ 12 ] CVE-2021-30823 https://nvd.nist.gov/vuln/detail/CVE-2021-30823 [ 13 ] CVE-2021-30734 https://nvd.nist.gov/vuln/detail/CVE-2021-30734 [ 14 ] CVE-2021-30934 https://nvd.nist.gov/vuln/detail/CVE-2021-30934 [ 15 ] CVE-2021-1871 https://nvd.nist.gov/vuln/detail/CVE-2021-1871 [ 16 ] CVE-2021-30762 https://nvd.nist.gov/vuln/detail/CVE-2021-30762 [ 17 ] WSA-2021-0006 https://webkitgtk.org/security/WSA-2021-0006.html [ 18 ] CVE-2021-30797 https://nvd.nist.gov/vuln/detail/CVE-2021-30797 [ 19 ] CVE-2021-30936 https://nvd.nist.gov/vuln/detail/CVE-2021-30936 [ 20 ] CVE-2021-30663 https://nvd.nist.gov/vuln/detail/CVE-2021-30663 [ 21 ] CVE-2021-1825 https://nvd.nist.gov/vuln/detail/CVE-2021-1825 [ 22 ] CVE-2021-30951 https://nvd.nist.gov/vuln/detail/CVE-2021-30951 [ 23 ] CVE-2021-30952 https://nvd.nist.gov/vuln/detail/CVE-2021-30952 [ 24 ] CVE-2021-1788 https://nvd.nist.gov/vuln/detail/CVE-2021-1788 [ 25 ] CVE-2021-1820 https://nvd.nist.gov/vuln/detail/CVE-2021-1820 [ 26 ] CVE-2021-30953 https://nvd.nist.gov/vuln/detail/CVE-2021-30953 [ 27 ] CVE-2021-30749 https://nvd.nist.gov/vuln/detail/CVE-2021-30749 [ 28 ] CVE-2021-30849 https://nvd.nist.gov/vuln/detail/CVE-2021-30849 [ 29 ] CVE-2021-1826 
https://nvd.nist.gov/vuln/detail/CVE-2021-1826 [ 30 ] CVE-2021-30836 https://nvd.nist.gov/vuln/detail/CVE-2021-30836 [ 31 ] CVE-2021-30954 https://nvd.nist.gov/vuln/detail/CVE-2021-30954 [ 32 ] CVE-2021-30984 https://nvd.nist.gov/vuln/detail/CVE-2021-30984 [ 33 ] CVE-2021-30851 https://nvd.nist.gov/vuln/detail/CVE-2021-30851 [ 34 ] CVE-2021-30758 https://nvd.nist.gov/vuln/detail/CVE-2021-30758 [ 35 ] CVE-2021-42762 https://nvd.nist.gov/vuln/detail/CVE-2021-42762 [ 36 ] CVE-2021-1844 https://nvd.nist.gov/vuln/detail/CVE-2021-1844 [ 37 ] CVE-2021-30689 https://nvd.nist.gov/vuln/detail/CVE-2021-30689 [ 38 ] CVE-2021-45482 https://nvd.nist.gov/vuln/detail/CVE-2021-45482 [ 39 ] CVE-2021-30858 https://nvd.nist.gov/vuln/detail/CVE-2021-30858 [ 40 ] CVE-2021-21779 https://nvd.nist.gov/vuln/detail/CVE-2021-21779 [ 41 ] WSA-2021-0004 https://webkitgtk.org/security/WSA-2021-0004.html [ 42 ] CVE-2021-30846 https://nvd.nist.gov/vuln/detail/CVE-2021-30846 [ 43 ] CVE-2021-30744 https://nvd.nist.gov/vuln/detail/CVE-2021-30744 [ 44 ] CVE-2021-30809 https://nvd.nist.gov/vuln/detail/CVE-2021-30809 [ 45 ] CVE-2021-30884 https://nvd.nist.gov/vuln/detail/CVE-2021-30884 [ 46 ] CVE-2021-30720 https://nvd.nist.gov/vuln/detail/CVE-2021-30720 [ 47 ] CVE-2021-30799 https://nvd.nist.gov/vuln/detail/CVE-2021-30799 [ 48 ] CVE-2021-30795 https://nvd.nist.gov/vuln/detail/CVE-2021-30795 [ 49 ] CVE-2021-1817 https://nvd.nist.gov/vuln/detail/CVE-2021-1817 [ 50 ] CVE-2021-21775 https://nvd.nist.gov/vuln/detail/CVE-2021-21775 [ 51 ] CVE-2021-30887 https://nvd.nist.gov/vuln/detail/CVE-2021-30887 [ 52 ] CVE-2021-21806 https://nvd.nist.gov/vuln/detail/CVE-2021-21806 [ 53 ] CVE-2021-30818 https://nvd.nist.gov/vuln/detail/CVE-2021-30818

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202202-01

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202109-1315",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.4.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.5.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.5.3"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.5.1"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.3.1"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.6"
      },
      {
        "model": "iphone os",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0"
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "ios",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "watchos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "tvos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "162825"
      },
      {
        "db": "PACKETSTORM",
        "id": "162448"
      },
      {
        "db": "PACKETSTORM",
        "id": "162449"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      }
    ],
    "trust": 0.9
  },
  "cve": "CVE-2021-30665",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-30665",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.8,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-390398",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2021-30665",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2021-30665",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-30665",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
            "id": "CVE-2021-30665",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-30665",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202105-060",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-390398",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A memory corruption issue was addressed with improved state management. This issue is fixed in watchOS 7.4.1, iOS 14.5.1 and iPadOS 14.5.1, tvOS 14.6, iOS 12.5.3, macOS Big Sur 11.3.1. Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. Multiple Apple products contain a vulnerability related to out-of-bounds writes. Information may be obtained, information may be tampered with, and service operation may be interrupted (a denial-of-service, DoS, condition). Apple Safari is a web browser developed by Apple and the default browser included with the Mac OS X and iOS operating systems. A buffer error vulnerability exists in Apple Safari that stems from a bounds error in WebKit. A remote attacker could exploit this vulnerability to execute arbitrary code on the target system. The following products and versions are affected: Apple Safari: 14.0, 14.0.1, 14.0.2, 14.0.3, 14.0.3-14610.4.3.1.7, 14.0.3-15610.4.3.1.7, 14.1. APPLE-SA-2021-05-03-3 watchOS 7.4.1. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: GNOME security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2021:4381-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:4381\nIssue date:        2021-11-09\nCVE Names:         CVE-2020-13558 CVE-2020-24870 CVE-2020-27918\n                   CVE-2020-29623 CVE-2020-36241 CVE-2021-1765\n                   CVE-2021-1788 CVE-2021-1789 CVE-2021-1799\n                   CVE-2021-1801 CVE-2021-1844 CVE-2021-1870\n                   CVE-2021-1871 CVE-2021-21775 CVE-2021-21779\n                   CVE-2021-21806 CVE-2021-28650 CVE-2021-30663\n                   CVE-2021-30665 CVE-2021-30682 CVE-2021-30689\n                   CVE-2021-30720 CVE-2021-30734 CVE-2021-30744\n                   CVE-2021-30749 CVE-2021-30758 CVE-2021-30795\n                   CVE-2021-30797 CVE-2021-30799\n====================================================================\n1. Summary:\n\nAn update for GNOME is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux CRB (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nGNOME is the default desktop environment of Red Hat Enterprise Linux. \n\nThe following packages have been upgraded to a later upstream version: gdm\n(40.0), webkit2gtk3 (2.32.3). 
\n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nGDM must be restarted for this update to take effect. The GNOME session\nmust be restarted (log out, then log back in) for this update to take\neffect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1651378 - [RFE] Provide a mechanism for persistently showing the security level of a machine at login time\n1770302 - disable show text in GDM login/lock screen (patched in RHEL 7.8)\n1791478 - Cannot completely disable odrs (Gnome Ratings) from the Software application in Gnome Desktop\n1813727 - Files copied from NFS4 to Desktop can\u0027t be opened\n1854679 - [RFE] Disable left edge gesture\n1873297 - Gnome-software coredumps when run as root in terminal\n1873488 - GTK3 prints errors with overlay scrollbar disabled\n1888404 - Updates page hides ongoing updates on refresh\n1894613 - [RFE] Re-inclusion of workspace renaming in GNOME 3. \n1897932 - JS ERROR: Error: Extension point conflict: there is already a status indicator for role ... 
\n1904139 - Automatic Logout Feature not working\n1905000 - Desktop refresh broken after unlock\n1909300 - gdm isn\u0027t killing the login screen on login after all, should rebase to latest release\n1914925 - RFE: add patch to set grub boot_success flag on shutdown/reboot\n1924725 - [Wayland] Double-touch desktop icons fails sometimes\n1925640 - CVE-2020-36241 gnome-autoar: Directory traversal via directory symbolic links pointing outside of the destination directory\n1928794 - CVE-2020-24870 LibRaw: Stack buffer overflow in LibRaw::identify_process_dng_fields() in identify.cpp\n1928886 - CVE-2020-13558 webkitgtk: Use-after-free in AudioSourceProviderGStreamer leading to arbitrary code execution\n1935261 - [RFE] Enable connecting to WiFI and VPN connections at the GDM login\n1937416 - Rebase WebKitGTK to 2.32\n1937866 - Unable to disable onscreen keyboard in touch screen machine [rhel-8.5.0]\n1938937 - Mutter: mouse click doesn\u0027t work when using 10-bit graphic monitor [rhel-8.5.0]\n1940026 - CVE-2021-28650 gnome-autoar: Directory traversal via directory symbolic links pointing outside of the destination directory (incomplete CVE-2020-36241 fix)\n1944323 - CVE-2020-27918 webkitgtk: Use-after-free leading to arbitrary code execution\n1944329 - CVE-2020-29623 webkitgtk: User may be unable to fully delete browsing history\n1944333 - CVE-2021-1765 webkitgtk: IFrame sandboxing policy violation\n1944337 - CVE-2021-1789 webkitgtk: Type confusion issue leading to arbitrary code execution\n1944340 - CVE-2021-1799 webkitgtk: Access to restricted ports on arbitrary servers via port redirection\n1944343 - CVE-2021-1801 webkitgtk: IFrame sandboxing policy violation\n1944350 - CVE-2021-1870 webkitgtk: Logic issue leading to arbitrary code execution\n1944859 - CVE-2021-1788 webkitgtk: Use-after-free leading to arbitrary code execution\n1944862 - CVE-2021-1844 webkitgtk: Memory corruption issue leading to arbitrary code execution\n1944867 - CVE-2021-1871 webkitgtk: Logic 
issue leading to arbitrary code execution\n1949176 - GNOME Shell on Wayland does not generate xauth data, needed for X forwarding over SSH\n1951086 - Disable the Facebook provider\n1952136 - Disable the Foursquare provider\n1955754 - gnome-session kiosk-session support still isn\u0027t up to muster\n1957705 - RFE: make gnome-calculator internet access attemps configurable system-wide\n1960705 - Vino nonfunctional in FIPS mode\n1962049 - [Hyper-V][RHEL8.5]gdm: Guest with 1 vcpu start GUI failed on Hyper-V\n1971507 - gnome-shell JS ERROR Error calling onComplete: TypeError this._dialog.actor is undefined _hideLockScreenComplete updateTweens\n1971534 - gnome-shell[2343]: gsignal.c:2642: instance \u00270x5583c61f9280\u0027 has no handler with id \u002723831\u0027\n1972545 - flatpak: Prefer runtime from the same origin as the application\n1978287 - gnome-shell to  include / Documented - PolicyKit-authentication-agent\n1978505 - Gnome Software development package is missing important header files. \n1978612 - pt_BR translations for \"Register System\" panel\n1980441 - CVE-2021-21806 webkitgtk: Use-after-free in fireEventListeners leading to arbitrary code execution\n1980661 - \"Screen Lock disabled\" notification appears on first login after disabling gdm and notification pop-up. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nLibRaw-0.19.5-3.el8.src.rpm\naccountsservice-0.6.55-2.el8.src.rpm\ngdm-40.0-15.el8.src.rpm\ngnome-autoar-0.2.3-2.el8.src.rpm\ngnome-calculator-3.28.2-2.el8.src.rpm\ngnome-control-center-3.28.2-28.el8.src.rpm\ngnome-online-accounts-3.28.2-3.el8.src.rpm\ngnome-session-3.28.1-13.el8.src.rpm\ngnome-settings-daemon-3.32.0-16.el8.src.rpm\ngnome-shell-3.32.2-40.el8.src.rpm\ngnome-shell-extensions-3.32.1-20.el8.src.rpm\ngnome-software-3.36.1-10.el8.src.rpm\ngtk3-3.22.30-8.el8.src.rpm\nmutter-3.32.2-60.el8.src.rpm\nvino-3.22.0-11.el8.src.rpm\nwebkit2gtk3-2.32.3-2.el8.src.rpm\n\naarch64:\naccountsservice-0.6.55-2.el8.aarch64.rpm\naccountsservice-debuginfo-0.6.55-2.el8.aarch64.rpm\naccountsservice-debugsource-0.6.55-2.el8.aarch64.rpm\naccountsservice-libs-0.6.55-2.el8.aarch64.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.aarch64.rpm\ngdm-40.0-15.el8.aarch64.rpm\ngdm-debuginfo-40.0-15.el8.aarch64.rpm\ngdm-debugsource-40.0-15.el8.aarch64.rpm\ngnome-autoar-0.2.3-2.el8.aarch64.rpm\ngnome-autoar-debuginfo-0.2.3-2.el8.aarch64.rpm\ngnome-autoar-debugsource-0.2.3-2.el8.aarch64.rpm\ngnome-calculator-3.28.2-2.el8.aarch64.rpm\ngnome-calculator-debuginfo-3.28.2-2.el8.aarch64.rpm\ngnome-calculator-debugsource-3.28.2-2.el8.aarch64.rpm\ngnome-control-center-3.28.2-28.el8.aarch64.rpm\ngnome-control-center-debuginfo-3.28.2-28.el8.aarch64.rpm\ngnome-control-center-debugsource-3.28.2-28.el8.aarch64.rpm\ngnome-online-accounts-3.28.2-3.el8.aarch64.rpm\ngnome-online-accounts-debuginfo-3.28.2-3.el8.aarch64.rpm\ngnome-online-accounts-debugsource-3.28.2-3.el8.aarch64.rpm\ngnome-online-accounts-devel-3.28.2-3.el8.aarch64.rpm\ngnome-session-3.28.1-13.el8.aarch64.rpm\ngnome-session-debuginfo-3.28.1-13.el8.aarch64.rpm\ngnome-session-debugsource-3.28.1-13.el8.aarch64.rpm\ngnome-session-kiosk-session-3.28.1-13.el8.aarch64.rpm\ngnome-session-wayland-session-3.28.1-13.el8.aarch64.rpm\ngnome-session-xsession-3.28.1-13.el8.aarch64.rpm\ngnome-settings-daemon-3.32.0-16.el8.aarch64.rpm\ngnome-se
ttings-daemon-debuginfo-3.32.0-16.el8.aarch64.rpm\ngnome-settings-daemon-debugsource-3.32.0-16.el8.aarch64.rpm\ngnome-shell-3.32.2-40.el8.aarch64.rpm\ngnome-shell-debuginfo-3.32.2-40.el8.aarch64.rpm\ngnome-shell-debugsource-3.32.2-40.el8.aarch64.rpm\ngnome-software-3.36.1-10.el8.aarch64.rpm\ngnome-software-debuginfo-3.36.1-10.el8.aarch64.rpm\ngnome-software-debugsource-3.36.1-10.el8.aarch64.rpm\ngsettings-desktop-schemas-devel-3.32.0-6.el8.aarch64.rpm\ngtk-update-icon-cache-3.22.30-8.el8.aarch64.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-8.el8.aarch64.rpm\ngtk3-3.22.30-8.el8.aarch64.rpm\ngtk3-debuginfo-3.22.30-8.el8.aarch64.rpm\ngtk3-debugsource-3.22.30-8.el8.aarch64.rpm\ngtk3-devel-3.22.30-8.el8.aarch64.rpm\ngtk3-devel-debuginfo-3.22.30-8.el8.aarch64.rpm\ngtk3-immodule-xim-3.22.30-8.el8.aarch64.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-8.el8.aarch64.rpm\ngtk3-immodules-debuginfo-3.22.30-8.el8.aarch64.rpm\ngtk3-tests-debuginfo-3.22.30-8.el8.aarch64.rpm\nmutter-3.32.2-60.el8.aarch64.rpm\nmutter-debuginfo-3.32.2-60.el8.aarch64.rpm\nmutter-debugsource-3.32.2-60.el8.aarch64.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.aarch64.rpm\nvino-3.22.0-11.el8.aarch64.rpm\nvino-debuginfo-3.22.0-11.el8.aarch64.rpm\nvino-debugsource-3.22.0-11.el8.aarch64.rpm\nwebkit2gtk3-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-devel-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.32.3-2.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.aarch64.rpm\n\nnoarch:\ngnome-classic-session-3.32.1-20.el8.noarch.rpm\ngnome-control-center-filesystem-3.28.2-28.el8.noarch.rpm\ngnome-shell-extension-apps-menu-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-auto-move-windows-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-common-3.32.1-20.el8.noarch.rpm\ngnome-she
ll-extension-dash-to-dock-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-desktop-icons-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-disable-screenshield-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-drive-menu-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-gesture-inhibitor-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-horizontal-workspaces-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-launch-new-instance-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-native-window-placement-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-no-hot-corner-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-panel-favorites-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-places-menu-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-screenshot-window-sizer-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-systemMonitor-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-top-icons-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-updates-dialog-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-user-theme-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-window-grouper-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-window-list-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-windowsNavigator-3.32.1-20.el8.noarch.rpm\ngnome-shell-extension-workspace-indicator-3.32.1-20.el8.noarch.rpm\n\nppc64le:\nLibRaw-0.19.5-3.el8.ppc64le.rpm\nLibRaw-debuginfo-0.19.5-3.el8.ppc64le.rpm\nLibRaw-debugsource-0.19.5-3.el8.ppc64le.rpm\nLibRaw-samples-debuginfo-0.19.5-3.el8.ppc64le.rpm\naccountsservice-0.6.55-2.el8.ppc64le.rpm\naccountsservice-debuginfo-0.6.55-2.el8.ppc64le.rpm\naccountsservice-debugsource-0.6.55-2.el8.ppc64le.rpm\naccountsservice-libs-0.6.55-2.el8.ppc64le.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.ppc64le.rpm\ngdm-40.0-15.el8.ppc64le.rpm\ngdm-debuginfo-40.0-15.el8.ppc64le.rpm\ngdm-debugsource-40.0-15.el8.ppc64le.rpm\ngnome-autoar-0.2.3-2.el8.ppc64le.rpm\ngnome-autoar-debuginfo-0.2.3-2.el8.ppc64le.rpm\ngnome-autoar-debugsource-0.2.3-2.el8.ppc64le.rpm\ngnome-calculator-3.28.2-2.el8.ppc64le.rpm\ngn
ome-calculator-debuginfo-3.28.2-2.el8.ppc64le.rpm\ngnome-calculator-debugsource-3.28.2-2.el8.ppc64le.rpm\ngnome-control-center-3.28.2-28.el8.ppc64le.rpm\ngnome-control-center-debuginfo-3.28.2-28.el8.ppc64le.rpm\ngnome-control-center-debugsource-3.28.2-28.el8.ppc64le.rpm\ngnome-online-accounts-3.28.2-3.el8.ppc64le.rpm\ngnome-online-accounts-debuginfo-3.28.2-3.el8.ppc64le.rpm\ngnome-online-accounts-debugsource-3.28.2-3.el8.ppc64le.rpm\ngnome-online-accounts-devel-3.28.2-3.el8.ppc64le.rpm\ngnome-session-3.28.1-13.el8.ppc64le.rpm\ngnome-session-debuginfo-3.28.1-13.el8.ppc64le.rpm\ngnome-session-debugsource-3.28.1-13.el8.ppc64le.rpm\ngnome-session-kiosk-session-3.28.1-13.el8.ppc64le.rpm\ngnome-session-wayland-session-3.28.1-13.el8.ppc64le.rpm\ngnome-session-xsession-3.28.1-13.el8.ppc64le.rpm\ngnome-settings-daemon-3.32.0-16.el8.ppc64le.rpm\ngnome-settings-daemon-debuginfo-3.32.0-16.el8.ppc64le.rpm\ngnome-settings-daemon-debugsource-3.32.0-16.el8.ppc64le.rpm\ngnome-shell-3.32.2-40.el8.ppc64le.rpm\ngnome-shell-debuginfo-3.32.2-40.el8.ppc64le.rpm\ngnome-shell-debugsource-3.32.2-40.el8.ppc64le.rpm\ngnome-software-3.36.1-10.el8.ppc64le.rpm\ngnome-software-debuginfo-3.36.1-10.el8.ppc64le.rpm\ngnome-software-debugsource-3.36.1-10.el8.ppc64le.rpm\ngsettings-desktop-schemas-devel-3.32.0-6.el8.ppc64le.rpm\ngtk-update-icon-cache-3.22.30-8.el8.ppc64le.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-8.el8.ppc64le.rpm\ngtk3-3.22.30-8.el8.ppc64le.rpm\ngtk3-debuginfo-3.22.30-8.el8.ppc64le.rpm\ngtk3-debugsource-3.22.30-8.el8.ppc64le.rpm\ngtk3-devel-3.22.30-8.el8.ppc64le.rpm\ngtk3-devel-debuginfo-3.22.30-8.el8.ppc64le.rpm\ngtk3-immodule-xim-3.22.30-8.el8.ppc64le.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-8.el8.ppc64le.rpm\ngtk3-immodules-debuginfo-3.22.30-8.el8.ppc64le.rpm\ngtk3-tests-debuginfo-3.22.30-8.el8.ppc64le.rpm\nmutter-3.32.2-60.el8.ppc64le.rpm\nmutter-debuginfo-3.32.2-60.el8.ppc64le.rpm\nmutter-debugsource-3.32.2-60.el8.ppc64le.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.ppc64le.rpm\nv
ino-3.22.0-11.el8.ppc64le.rpm\nvino-debuginfo-3.22.0-11.el8.ppc64le.rpm\nvino-debugsource-3.22.0-11.el8.ppc64le.rpm\nwebkit2gtk3-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.32.3-2.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.ppc64le.rpm\n\ns390x:\naccountsservice-0.6.55-2.el8.s390x.rpm\naccountsservice-debuginfo-0.6.55-2.el8.s390x.rpm\naccountsservice-debugsource-0.6.55-2.el8.s390x.rpm\naccountsservice-libs-0.6.55-2.el8.s390x.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.s390x.rpm\ngdm-40.0-15.el8.s390x.rpm\ngdm-debuginfo-40.0-15.el8.s390x.rpm\ngdm-debugsource-40.0-15.el8.s390x.rpm\ngnome-autoar-0.2.3-2.el8.s390x.rpm\ngnome-autoar-debuginfo-0.2.3-2.el8.s390x.rpm\ngnome-autoar-debugsource-0.2.3-2.el8.s390x.rpm\ngnome-calculator-3.28.2-2.el8.s390x.rpm\ngnome-calculator-debuginfo-3.28.2-2.el8.s390x.rpm\ngnome-calculator-debugsource-3.28.2-2.el8.s390x.rpm\ngnome-control-center-3.28.2-28.el8.s390x.rpm\ngnome-control-center-debuginfo-3.28.2-28.el8.s390x.rpm\ngnome-control-center-debugsource-3.28.2-28.el8.s390x.rpm\ngnome-online-accounts-3.28.2-3.el8.s390x.rpm\ngnome-online-accounts-debuginfo-3.28.2-3.el8.s390x.rpm\ngnome-online-accounts-debugsource-3.28.2-3.el8.s390x.rpm\ngnome-online-accounts-devel-3.28.2-3.el8.s390x.rpm\ngnome-session-3.28.1-13.el8.s390x.rpm\ngnome-session-debuginfo-3.28.1-13.el8.s390x.rpm\ngnome-session-debugsource-3.28.1-13.el8.s390x.rpm\ngnome-session-kiosk-session-3.28.1-13.el8.s390x.rpm\ngnome-session-wayland-session-3.28.1-13.el8.s390x.rpm\ngnome-session-xsession-3.28.1-13.el8.s390x.rpm\ngnome-settings-daemon-3.32.0-16.el8.s390x.rpm\ngnome-settings-daemon-debuginfo-3.32.0-16.el8.s390x.rpm\ngnome-settings-daemon-debugsource-3.32.0-16.el8.
s390x.rpm\ngnome-shell-3.32.2-40.el8.s390x.rpm\ngnome-shell-debuginfo-3.32.2-40.el8.s390x.rpm\ngnome-shell-debugsource-3.32.2-40.el8.s390x.rpm\ngnome-software-3.36.1-10.el8.s390x.rpm\ngnome-software-debuginfo-3.36.1-10.el8.s390x.rpm\ngnome-software-debugsource-3.36.1-10.el8.s390x.rpm\ngsettings-desktop-schemas-devel-3.32.0-6.el8.s390x.rpm\ngtk-update-icon-cache-3.22.30-8.el8.s390x.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-8.el8.s390x.rpm\ngtk3-3.22.30-8.el8.s390x.rpm\ngtk3-debuginfo-3.22.30-8.el8.s390x.rpm\ngtk3-debugsource-3.22.30-8.el8.s390x.rpm\ngtk3-devel-3.22.30-8.el8.s390x.rpm\ngtk3-devel-debuginfo-3.22.30-8.el8.s390x.rpm\ngtk3-immodule-xim-3.22.30-8.el8.s390x.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-8.el8.s390x.rpm\ngtk3-immodules-debuginfo-3.22.30-8.el8.s390x.rpm\ngtk3-tests-debuginfo-3.22.30-8.el8.s390x.rpm\nmutter-3.32.2-60.el8.s390x.rpm\nmutter-debuginfo-3.32.2-60.el8.s390x.rpm\nmutter-debugsource-3.32.2-60.el8.s390x.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.s390x.rpm\nvino-3.22.0-11.el8.s390x.rpm\nvino-debuginfo-3.22.0-11.el8.s390x.rpm\nvino-debugsource-3.22.0-11.el8.s390x.rpm\nwebkit2gtk3-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-devel-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-jsc-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.32.3-2.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.s390x.rpm\n\nx86_64:\nLibRaw-0.19.5-3.el8.i686.rpm\nLibRaw-0.19.5-3.el8.x86_64.rpm\nLibRaw-debuginfo-0.19.5-3.el8.i686.rpm\nLibRaw-debuginfo-0.19.5-3.el8.x86_64.rpm\nLibRaw-debugsource-0.19.5-3.el8.i686.rpm\nLibRaw-debugsource-0.19.5-3.el8.x86_64.rpm\nLibRaw-samples-debuginfo-0.19.5-3.el8.i686.rpm\nLibRaw-samples-debuginfo-0.19.5-3.el8.x86_64.rpm\naccountsservice-0.6.55-2.el8.x86_64.rpm\naccountsservice-debuginfo-0.6.55-2.el8.i686.rpm\naccountsservice-debuginfo-0.6.55-2.el8.x86_64
.rpm\naccountsservice-debugsource-0.6.55-2.el8.i686.rpm\naccountsservice-debugsource-0.6.55-2.el8.x86_64.rpm\naccountsservice-libs-0.6.55-2.el8.i686.rpm\naccountsservice-libs-0.6.55-2.el8.x86_64.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.i686.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.x86_64.rpm\ngdm-40.0-15.el8.i686.rpm\ngdm-40.0-15.el8.x86_64.rpm\ngdm-debuginfo-40.0-15.el8.i686.rpm\ngdm-debuginfo-40.0-15.el8.x86_64.rpm\ngdm-debugsource-40.0-15.el8.i686.rpm\ngdm-debugsource-40.0-15.el8.x86_64.rpm\ngnome-autoar-0.2.3-2.el8.i686.rpm\ngnome-autoar-0.2.3-2.el8.x86_64.rpm\ngnome-autoar-debuginfo-0.2.3-2.el8.i686.rpm\ngnome-autoar-debuginfo-0.2.3-2.el8.x86_64.rpm\ngnome-autoar-debugsource-0.2.3-2.el8.i686.rpm\ngnome-autoar-debugsource-0.2.3-2.el8.x86_64.rpm\ngnome-calculator-3.28.2-2.el8.x86_64.rpm\ngnome-calculator-debuginfo-3.28.2-2.el8.x86_64.rpm\ngnome-calculator-debugsource-3.28.2-2.el8.x86_64.rpm\ngnome-control-center-3.28.2-28.el8.x86_64.rpm\ngnome-control-center-debuginfo-3.28.2-28.el8.x86_64.rpm\ngnome-control-center-debugsource-3.28.2-28.el8.x86_64.rpm\ngnome-online-accounts-3.28.2-3.el8.i686.rpm\ngnome-online-accounts-3.28.2-3.el8.x86_64.rpm\ngnome-online-accounts-debuginfo-3.28.2-3.el8.i686.rpm\ngnome-online-accounts-debuginfo-3.28.2-3.el8.x86_64.rpm\ngnome-online-accounts-debugsource-3.28.2-3.el8.i686.rpm\ngnome-online-accounts-debugsource-3.28.2-3.el8.x86_64.rpm\ngnome-online-accounts-devel-3.28.2-3.el8.i686.rpm\ngnome-online-accounts-devel-3.28.2-3.el8.x86_64.rpm\ngnome-session-3.28.1-13.el8.x86_64.rpm\ngnome-session-debuginfo-3.28.1-13.el8.x86_64.rpm\ngnome-session-debugsource-3.28.1-13.el8.x86_64.rpm\ngnome-session-kiosk-session-3.28.1-13.el8.x86_64.rpm\ngnome-session-wayland-session-3.28.1-13.el8.x86_64.rpm\ngnome-session-xsession-3.28.1-13.el8.x86_64.rpm\ngnome-settings-daemon-3.32.0-16.el8.x86_64.rpm\ngnome-settings-daemon-debuginfo-3.32.0-16.el8.x86_64.rpm\ngnome-settings-daemon-debugsource-3.32.0-16.el8.x86_64.rpm\ngnome-shell-3.32.2-40.
el8.x86_64.rpm\ngnome-shell-debuginfo-3.32.2-40.el8.x86_64.rpm\ngnome-shell-debugsource-3.32.2-40.el8.x86_64.rpm\ngnome-software-3.36.1-10.el8.x86_64.rpm\ngnome-software-debuginfo-3.36.1-10.el8.x86_64.rpm\ngnome-software-debugsource-3.36.1-10.el8.x86_64.rpm\ngsettings-desktop-schemas-3.32.0-6.el8.i686.rpm\ngsettings-desktop-schemas-devel-3.32.0-6.el8.i686.rpm\ngsettings-desktop-schemas-devel-3.32.0-6.el8.x86_64.rpm\ngtk-update-icon-cache-3.22.30-8.el8.x86_64.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-8.el8.i686.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-8.el8.x86_64.rpm\ngtk3-3.22.30-8.el8.i686.rpm\ngtk3-3.22.30-8.el8.x86_64.rpm\ngtk3-debuginfo-3.22.30-8.el8.i686.rpm\ngtk3-debuginfo-3.22.30-8.el8.x86_64.rpm\ngtk3-debugsource-3.22.30-8.el8.i686.rpm\ngtk3-debugsource-3.22.30-8.el8.x86_64.rpm\ngtk3-devel-3.22.30-8.el8.i686.rpm\ngtk3-devel-3.22.30-8.el8.x86_64.rpm\ngtk3-devel-debuginfo-3.22.30-8.el8.i686.rpm\ngtk3-devel-debuginfo-3.22.30-8.el8.x86_64.rpm\ngtk3-immodule-xim-3.22.30-8.el8.x86_64.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-8.el8.i686.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-8.el8.x86_64.rpm\ngtk3-immodules-debuginfo-3.22.30-8.el8.i686.rpm\ngtk3-immodules-debuginfo-3.22.30-8.el8.x86_64.rpm\ngtk3-tests-debuginfo-3.22.30-8.el8.i686.rpm\ngtk3-tests-debuginfo-3.22.30-8.el8.x86_64.rpm\nmutter-3.32.2-60.el8.i686.rpm\nmutter-3.32.2-60.el8.x86_64.rpm\nmutter-debuginfo-3.32.2-60.el8.i686.rpm\nmutter-debuginfo-3.32.2-60.el8.x86_64.rpm\nmutter-debugsource-3.32.2-60.el8.i686.rpm\nmutter-debugsource-3.32.2-60.el8.x86_64.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.i686.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.x86_64.rpm\nvino-3.22.0-11.el8.x86_64.rpm\nvino-debuginfo-3.22.0-11.el8.x86_64.rpm\nvino-debugsource-3.22.0-11.el8.x86_64.rpm\nwebkit2gtk3-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-2.32.3-2.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.32.3-2.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-debugsource-2.32.3
-2.el8.x86_64.rpm\nwebkit2gtk3-devel-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-devel-2.32.3-2.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.32.3-2.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-jsc-2.32.3-2.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.32.3-2.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.32.3-2.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.32.3-2.el8.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\ngsettings-desktop-schemas-3.32.0-6.el8.src.rpm\n\naarch64:\ngsettings-desktop-schemas-3.32.0-6.el8.aarch64.rpm\n\nppc64le:\ngsettings-desktop-schemas-3.32.0-6.el8.ppc64le.rpm\n\ns390x:\ngsettings-desktop-schemas-3.32.0-6.el8.s390x.rpm\n\nx86_64:\ngsettings-desktop-schemas-3.32.0-6.el8.x86_64.rpm\n\nRed Hat Enterprise Linux CRB (v. 8):\n\naarch64:\naccountsservice-debuginfo-0.6.55-2.el8.aarch64.rpm\naccountsservice-debugsource-0.6.55-2.el8.aarch64.rpm\naccountsservice-devel-0.6.55-2.el8.aarch64.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.aarch64.rpm\ngnome-software-debuginfo-3.36.1-10.el8.aarch64.rpm\ngnome-software-debugsource-3.36.1-10.el8.aarch64.rpm\ngnome-software-devel-3.36.1-10.el8.aarch64.rpm\nmutter-debuginfo-3.32.2-60.el8.aarch64.rpm\nmutter-debugsource-3.32.2-60.el8.aarch64.rpm\nmutter-devel-3.32.2-60.el8.aarch64.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.aarch64.rpm\n\nppc64le:\nLibRaw-debuginfo-0.19.5-3.el8.ppc64le.rpm\nLibRaw-debugsource-0.19.5-3.el8.ppc64le.rpm\nLibRaw-devel-0.19.5-3.el8.ppc64le.rpm\nLibRaw-samples-debuginfo-0.19.5-3.el8.ppc64le.rpm\naccountsservice-debuginfo-0.6.55-2.el8.ppc64le.rpm\naccountsservice-debugsource-0.6.55-2.el8.ppc64le.rpm\naccountsservice-devel-0.6.55-2.el8.ppc64le.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.ppc64le.rpm\ngnome-software-debuginfo-3.36.1-10.el8.ppc64le.rpm\ngnome-s
oftware-debugsource-3.36.1-10.el8.ppc64le.rpm\ngnome-software-devel-3.36.1-10.el8.ppc64le.rpm\nmutter-debuginfo-3.32.2-60.el8.ppc64le.rpm\nmutter-debugsource-3.32.2-60.el8.ppc64le.rpm\nmutter-devel-3.32.2-60.el8.ppc64le.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.ppc64le.rpm\n\ns390x:\naccountsservice-debuginfo-0.6.55-2.el8.s390x.rpm\naccountsservice-debugsource-0.6.55-2.el8.s390x.rpm\naccountsservice-devel-0.6.55-2.el8.s390x.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.s390x.rpm\ngnome-software-debuginfo-3.36.1-10.el8.s390x.rpm\ngnome-software-debugsource-3.36.1-10.el8.s390x.rpm\ngnome-software-devel-3.36.1-10.el8.s390x.rpm\nmutter-debuginfo-3.32.2-60.el8.s390x.rpm\nmutter-debugsource-3.32.2-60.el8.s390x.rpm\nmutter-devel-3.32.2-60.el8.s390x.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.s390x.rpm\n\nx86_64:\nLibRaw-debuginfo-0.19.5-3.el8.i686.rpm\nLibRaw-debuginfo-0.19.5-3.el8.x86_64.rpm\nLibRaw-debugsource-0.19.5-3.el8.i686.rpm\nLibRaw-debugsource-0.19.5-3.el8.x86_64.rpm\nLibRaw-devel-0.19.5-3.el8.i686.rpm\nLibRaw-devel-0.19.5-3.el8.x86_64.rpm\nLibRaw-samples-debuginfo-0.19.5-3.el8.i686.rpm\nLibRaw-samples-debuginfo-0.19.5-3.el8.x86_64.rpm\naccountsservice-debuginfo-0.6.55-2.el8.i686.rpm\naccountsservice-debuginfo-0.6.55-2.el8.x86_64.rpm\naccountsservice-debugsource-0.6.55-2.el8.i686.rpm\naccountsservice-debugsource-0.6.55-2.el8.x86_64.rpm\naccountsservice-devel-0.6.55-2.el8.i686.rpm\naccountsservice-devel-0.6.55-2.el8.x86_64.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.i686.rpm\naccountsservice-libs-debuginfo-0.6.55-2.el8.x86_64.rpm\ngnome-software-3.36.1-10.el8.i686.rpm\ngnome-software-debuginfo-3.36.1-10.el8.i686.rpm\ngnome-software-debuginfo-3.36.1-10.el8.x86_64.rpm\ngnome-software-debugsource-3.36.1-10.el8.i686.rpm\ngnome-software-debugsource-3.36.1-10.el8.x86_64.rpm\ngnome-software-devel-3.36.1-10.el8.i686.rpm\ngnome-software-devel-3.36.1-10.el8.x86_64.rpm\nmutter-debuginfo-3.32.2-60.el8.i686.rpm\nmutter-debuginfo-3.32.2-60.el8.x86_64.rpm\nmutter-debugsour
ce-3.32.2-60.el8.i686.rpm\nmutter-debugsource-3.32.2-60.el8.x86_64.rpm\nmutter-devel-3.32.2-60.el8.i686.rpm\nmutter-devel-3.32.2-60.el8.x86_64.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.i686.rpm\nmutter-tests-debuginfo-3.32.2-60.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-13558\nhttps://access.redhat.com/security/cve/CVE-2020-24870\nhttps://access.redhat.com/security/cve/CVE-2020-27918\nhttps://access.redhat.com/security/cve/CVE-2020-29623\nhttps://access.redhat.com/security/cve/CVE-2020-36241\nhttps://access.redhat.com/security/cve/CVE-2021-1765\nhttps://access.redhat.com/security/cve/CVE-2021-1788\nhttps://access.redhat.com/security/cve/CVE-2021-1789\nhttps://access.redhat.com/security/cve/CVE-2021-1799\nhttps://access.redhat.com/security/cve/CVE-2021-1801\nhttps://access.redhat.com/security/cve/CVE-2021-1844\nhttps://access.redhat.com/security/cve/CVE-2021-1870\nhttps://access.redhat.com/security/cve/CVE-2021-1871\nhttps://access.redhat.com/security/cve/CVE-2021-21775\nhttps://access.redhat.com/security/cve/CVE-2021-21779\nhttps://access.redhat.com/security/cve/CVE-2021-21806\nhttps://access.redhat.com/security/cve/CVE-2021-28650\nhttps://access.redhat.com/security/cve/CVE-2021-30663\nhttps://access.redhat.com/security/cve/CVE-2021-30665\nhttps://access.redhat.com/security/cve/CVE-2021-30682\nhttps://access.redhat.com/security/cve/CVE-2021-30689\nhttps://access.redhat.com/security/cve/CVE-2021-30720\nhttps://access.redhat.com/security/cve/CVE-2021-30734\nhttps://access.redhat.com/security/cve/CVE-2021-30744\nhttps://access.redhat.com/security/cve/CVE-2021-30749\nhttps://access.redhat.com/security/cve/CVE-2021-30758\nhttps://access.redhat.com/security/cve/CVE-2021-30795\nhttps://access.redhat.com/security/cve/CVE-2021-30797\nhttps://access.redhat.com/se
curity/cve/CVE-2021-30799\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYYrdm9zjgjWX9erEAQhgIA/+KzLn8QVHI3X8x9ufH1+nO8QXQqwTGQ0E\nawNXP8h4qsL7EGugHrz/KVjwaKJs/erPxh5jGl/xE1ZhngGlyStUpQkI2Y3cP2/3\n05jDPPS0QEfG5Y0rlnESyPxtwQTCpqped5P7L8VtKuzRae1HV63onsBB8zpcIFF7\nsTKcP6wAAjJDltUjlhnEkkE3G6Dxfv14/UowRAWoT9pa9cP0+KqdhuYKHdt3fCD7\ntEItM/SFQGoCF8zvXbvAiUXfZsQ/t/Yik9O6WISTWenaxCcP43Xn7aicsvZMVOvQ\nw+jnH/hnMLBoPhH2k4PClsDapa/D6IrQIUrwxtgfbC4KRs0fbdrEGCPqs4nl/AdD\nMigcf4gCMBq0bk3/yKp+/bi+OWwRMmw3ZdkJsOTNrOAkK1UCyrpF1ULyfs+8/OC5\nQnXW88fPCwhFj+KSAq5Cqfwm3hrKTCWIT/T1DQBG+J7Y9NgEx+zEXVmWaaA0z+7T\nqji5aUsIH+TG3t1EwtXABWGGEBRxC+svUoWNJBW1u6qwxfMx5E+hHUHhRewVYLYu\nSToRXa3cIX23M/XyHNXBgMCpPPw8DeY5aAA1fvKQsuMCLywDg0N3mYhvk1HUNidb\nZ6HmsLjLrGbkb1AAhP0V0wUuh5P6YJlL6iM49fQgztlHoBO0OAo56GBjAyT3pAAX\n2rgR2Ny0wo4=gfrM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
\n\nSecurity Fix(es):\n\n* mig-controller: incorrect namespaces handling may lead to not authorized\nusage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-05-25-7 tvOS 14.6\n\ntvOS 14.6 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212532. 
\n\nAudio\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30707: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nAudio\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Parsing a maliciously crafted audio file may lead to\ndisclosure of user information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30685: Mickey Jin (@patch1t) of Trend Micro\n\nCoreAudio\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted audio file may disclose\nrestricted memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30686: Mickey Jin of Trend Micro\n\nCrash Reporter\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30727: Cees Elzinga\n\nCVMS\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A local attacker may be able to elevate  their privileges\nDescription: This issue was addressed with improved checks. \nCVE-2021-30724: Mickey Jin (@patch1t) of Trend Micro\n\nHeimdal\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A local user may be able to leak sensitive user information\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30697: Gabe Kirkpatrick (@gabe_k)\n\nHeimdal\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious application may cause a denial of service or\npotentially disclose memory contents\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2021-30710: Gabe Kirkpatrick (@gabe_k)\n\nImageIO\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted image may lead to disclosure\nof user information\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30687: Hou JingYi (@hjy79425575) of Qihoo 360\n\nImageIO\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted image may lead to disclosure\nof user information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30700: Ye Zhang(@co0py_Cat) of Baidu Security\n\nImageIO\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30701: Mickey Jin (@patch1t) of Trend Micro and Ye Zhang of\nBaidu Security\n\nImageIO\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted ASTC file may disclose\nmemory contents\nDescription: This issue was addressed with improved checks. \nCVE-2021-30705: Ye Zhang of Baidu Security\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30740: Linus Henze (pinauten.de)\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30704: an anonymous researcher\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted message may lead to a denial\nof service\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2021-30715: The UK\u0027s National Cyber Security Centre (NCSC)\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A buffer overflow was addressed with improved size\nvalidation. \nCVE-2021-30736: Ian Beer of Google Project Zero\n\nLaunchServices\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: This issue was addressed with improved environment\nsanitization. \nCVE-2021-30677: Ron Waisberg (@epsilan)\n\nSecurity\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted certificate may lead to\narbitrary code execution\nDescription: A memory corruption issue in the ASN.1 decoder was\naddressed by removing the vulnerable code. \nCVE-2021-30665: yangkang (@dnpushme)\u0026zerokeeper\u0026bianliang of 360 ATA\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A cross-origin issue with iframe elements was addressed\nwith improved tracking of security origins. \nCVE-2021-21779: Marcin Towalski of Cisco Talos\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30682: an anonymous researcher and 1lastBr3ath\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab,\nASU. working with Trend Micro Zero Day Initiative\nCVE-2021-30734: Jack Dates of RET2 Systems, Inc. 
(@ret2systems)\nworking with Trend Micro Zero Day Initiative\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious website may be able to access restricted ports on\narbitrary servers\nDescription: A logic issue was addressed with improved restrictions. \nDescription: An integer overflow was addressed with improved input\nvalidation. \nCVE-2021-30663: an anonymous researcher\n\nAdditional recognition\n\nImageIO\nWe would like to acknowledge Jzhu working with Trend Micro Zero Day\nInitiative and an anonymous researcher for their assistance. \n\nWebKit\nWe would like to acknowledge Chris Salls (@salls) of Makai Security\nfor their assistance. \n\nApple TV will periodically check for software updates. Alternatively,\nyou may manually check for software updates by selecting\n\"Settings -\u003e System -\u003e Software Update -\u003e Update Software.\"\n\nTo check the current version of software, select\n\"Settings -\u003e General -\u003e About.\"\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmCtU9MACgkQZcsbuWJ6\njjBzuhAAmXJik2L+PmRMzs6dd1QcCSwHYi0KLG0ERapHKJsFcm5+xpv87a4AFO4p\n3E6+5w9wQSWVEsQG1PIvuyV3M81xuu8xY88tAD1ce1qGA4Dny4E7RU08Y0l43j/x\nd1RemCf0TjwYpvX34/GaOspxFQYnRo1gWsU1v7bieF8vMHZmUOlgiNep0UEG3Kuq\n7IAAsfzWS43a+nkefSDWEujMNwbg1SZKua/+BXgZC7AOXdAHItqyNBFIerUc2uSf\nReHLZ5BNBKw9OsL9qoJsiLCmwxKrpUTzpQahu2gybZf65nza6QPOTohqqWq79EOD\nmIqOW4SQ5mVSrzMh+GB9EovMY+l5YgyHwObTUjRW+4znLU7fqNXBgwzgWoIpJdF0\nrpkjP3phOGXZWwiBhRmm5iYI08HFoBfF+EoPFN5Ucl7ZWz2uF0bQlbp3yqRoGRaO\nZWY2LzPIdP5zSq7rqXDaVnNFuKF93J4ouZZwVMXA4yf5wmQ3silIeJlvxxphlet8\noXv2pkewq9A81RGMlgMDZMvawQvPGkOVgeBm1coajN1swNY8esW7N6J1+rtDL0mI\nsulaGZCeSM9ndg5VRU2lpClFdGEUZXT2hZ8NoMV6jj48c0gZBW3M82snGD4zeRqM\ndcezqg6o22ZxpogRJuRf41Y87ktE5o73wgj0xu72MQoxK86+Ek0=\n=BeQR\n-----END PGP SIGNATURE-----\n\n\n. \nCVE-2021-30663: an anonymous researcher\n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202202-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: February 01, 2022\n     Bugs: #779175, #801400, #813489, #819522, #820434, #829723,\n           #831739\n       ID: 202202-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in the arbitrary execution of code. \n\nBackground\n=========\nWebKitGTK+ is a full-featured port of the WebKit rendering engine,\nsuitable for projects requiring any kind of web integration, from hybrid\nHTML/CSS applications to full-fledged web browsers. \n\nAffected packages\n================\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk        \u003c 2.34.4                    \u003e= 2.34.4\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebkitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.34.4\"\n\nReferences\n=========\n[ 1 ] CVE-2021-30848\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30848\n[ 2 ] CVE-2021-30888\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30888\n[ 3 ] CVE-2021-30682\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30682\n[ 4 ] CVE-2021-30889\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30889\n[ 5 ] CVE-2021-30666\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30666\n[ 6 ] CVE-2021-30665\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30665\n[ 7 ] CVE-2021-30890\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30890\n[ 8 ] CVE-2021-30661\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30661\n[ 9 ] WSA-2021-0005\n      https://webkitgtk.org/security/WSA-2021-0005.html\n[ 10 ] CVE-2021-30761\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30761\n[ 11 ] CVE-2021-30897\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30897\n[ 12 ] CVE-2021-30823\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30823\n[ 13 ] CVE-2021-30734\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30734\n[ 14 ] CVE-2021-30934\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30934\n[ 15 ] CVE-2021-1871\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1871\n[ 16 ] CVE-2021-30762\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30762\n[ 17 ] WSA-2021-0006\n      https://webkitgtk.org/security/WSA-2021-0006.html\n[ 18 ] CVE-2021-30797\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30797\n[ 19 ] CVE-2021-30936\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30936\n[ 20 ] CVE-2021-30663\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30663\n[ 21 ] CVE-2021-1825\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1825\n[ 22 ] CVE-2021-30951\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30951\n[ 23 ] CVE-2021-30952\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30952\n[ 24 ] 
CVE-2021-1788\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1788\n[ 25 ] CVE-2021-1820\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1820\n[ 26 ] CVE-2021-30953\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30953\n[ 27 ] CVE-2021-30749\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30749\n[ 28 ] CVE-2021-30849\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30849\n[ 29 ] CVE-2021-1826\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1826\n[ 30 ] CVE-2021-30836\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30836\n[ 31 ] CVE-2021-30954\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30954\n[ 32 ] CVE-2021-30984\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30984\n[ 33 ] CVE-2021-30851\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30851\n[ 34 ] CVE-2021-30758\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30758\n[ 35 ] CVE-2021-42762\n      https://nvd.nist.gov/vuln/detail/CVE-2021-42762\n[ 36 ] CVE-2021-1844\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1844\n[ 37 ] CVE-2021-30689\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30689\n[ 38 ] CVE-2021-45482\n      https://nvd.nist.gov/vuln/detail/CVE-2021-45482\n[ 39 ] CVE-2021-30858\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30858\n[ 40 ] CVE-2021-21779\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21779\n[ 41 ] WSA-2021-0004\n       https://webkitgtk.org/security/WSA-2021-0004.html\n[ 42 ] CVE-2021-30846\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30846\n[ 43 ] CVE-2021-30744\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30744\n[ 44 ] CVE-2021-30809\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30809\n[ 45 ] CVE-2021-30884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30884\n[ 46 ] CVE-2021-30720\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30720\n[ 47 ] CVE-2021-30799\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30799\n[ 48 ] CVE-2021-30795\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30795\n[ 49 ] CVE-2021-1817\n      
https://nvd.nist.gov/vuln/detail/CVE-2021-1817\n[ 50 ] CVE-2021-21775\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21775\n[ 51 ] CVE-2021-30887\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30887\n[ 52 ] CVE-2021-21806\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21806\n[ 53 ] CVE-2021-30818\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30818\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202202-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-30665"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30665"
      },
      {
        "db": "PACKETSTORM",
        "id": "164872"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "162825"
      },
      {
        "db": "PACKETSTORM",
        "id": "162448"
      },
      {
        "db": "PACKETSTORM",
        "id": "162449"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      }
    ],
    "trust": 2.34
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-390398",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-30665",
        "trust": 4.0
      },
      {
        "db": "PACKETSTORM",
        "id": "162825",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "165794",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164872",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162447",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2021072711",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021050503",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021072919",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021080506",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021052503",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1795",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0245",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2563",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3779",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2787",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2622",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1501",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "162449",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "162448",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "162452",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-390398",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30665",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165631",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30665"
      },
      {
        "db": "PACKETSTORM",
        "id": "164872"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "162825"
      },
      {
        "db": "PACKETSTORM",
        "id": "162448"
      },
      {
        "db": "PACKETSTORM",
        "id": "162449"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "id": "VAR-202109-1315",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:11:59.868000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT212341 Apple\u00a0 Security update",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT212335"
      },
      {
        "title": "Apple Safari Buffer error vulnerability fix",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=149091"
      },
      {
        "title": "BleepingComputer",
        "trust": 0.1,
        "url": "https://www.bleepingcomputer.com/news/apple/apple-fixes-2-ios-zero-day-vulnerabilities-actively-used-in-attacks/"
      },
      {
        "title": null,
        "trust": 0.1,
        "url": "https://www.theregister.co.uk/2021/05/04/in_brief_security/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-30665"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.1
      },
      {
        "problemtype": "Out-of-bounds writing (CWE-787) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30665"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212335"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212336"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212339"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212341"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht212532"
      },
      {
        "trust": 1.0,
        "url": "https://www.cisa.gov/known-exploited-vulnerabilities-catalog?field_cve=cve-2021-30665"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2021-30665"
      },
      {
        "trust": 0.8,
        "url": "https://cisa.gov/known-exploited-vulnerabilities-catalog"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021080506"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-macos-two-vulnerabilities-via-webkit-35223"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021052503"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162825/apple-security-advisory-2021-05-25-7.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162447/apple-security-advisory-2021-05-03-2.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021072711"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3779"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1501"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2622"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2787"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164872/red-hat-security-advisory-2021-4381-05.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-multiple-vulnerabilities-36009"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1795"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht212340"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2563"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021050503"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021072919"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165794/gentoo-linux-security-advisory-202202-01.html"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30663"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21779"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30689"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30744"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30749"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30720"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30682"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30734"
      },
      {
        "trust": 0.3,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.3,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21775"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30744"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1844"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21775"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30795"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1871"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21806"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30734"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1871"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30758"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1870"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1801"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1844"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36241"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30797"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1765"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30720"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13558"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-28650"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24870"
      },
      {
        "trust": 0.2,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1799"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30758"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21779"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29623"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1789"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27918"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30749"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30795"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30663"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-1788"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30799"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30689"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21806"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30682"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1788"
      },
      {
        "trust": 0.2,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/fulldisclosure/2021/may/7"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1765"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4381"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1801"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1870"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24870"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1789"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36241"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28650"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35522"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3575"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15389"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27645"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-5727"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14145"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41617"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12973"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-35942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-18032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3572"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-4658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20847"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36331"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-5785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27814"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36330"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20266"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27842"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10001"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3948"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27828"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29338"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35523"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26926"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3426"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3796"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3272"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0202"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30715"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30740"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30705"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212532."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30710"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30697"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30685"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30737"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30704"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30736"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30707"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30686"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30677"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30727"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30724"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30700"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30701"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212336."
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/downloads/"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212335."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1820"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0005.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30858"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1817"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42762"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30799"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202202-01"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1825"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30797"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0004.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0006.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30665"
      },
      {
        "db": "PACKETSTORM",
        "id": "164872"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "162825"
      },
      {
        "db": "PACKETSTORM",
        "id": "162448"
      },
      {
        "db": "PACKETSTORM",
        "id": "162449"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30665"
      },
      {
        "db": "PACKETSTORM",
        "id": "164872"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "162825"
      },
      {
        "db": "PACKETSTORM",
        "id": "162448"
      },
      {
        "db": "PACKETSTORM",
        "id": "162449"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-09-08T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "date": "2021-11-10T17:09:58",
        "db": "PACKETSTORM",
        "id": "164872"
      },
      {
        "date": "2022-01-20T17:48:29",
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "date": "2021-05-26T17:50:13",
        "db": "PACKETSTORM",
        "id": "162825"
      },
      {
        "date": "2021-05-04T16:23:46",
        "db": "PACKETSTORM",
        "id": "162448"
      },
      {
        "date": "2021-05-04T16:23:57",
        "db": "PACKETSTORM",
        "id": "162449"
      },
      {
        "date": "2022-02-01T17:03:05",
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "date": "2021-05-04T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      },
      {
        "date": "2022-09-14T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "date": "2021-09-08T15:15:13.507000",
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-05-03T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390398"
      },
      {
        "date": "2022-05-06T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      },
      {
        "date": "2024-05-31T06:13:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      },
      {
        "date": "2025-10-23T14:55:47.683000",
        "db": "NVD",
        "id": "CVE-2021-30665"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "plural \u00a0Apple\u00a0 Out-of-bounds write vulnerabilities in the product",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013501"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-060"
      }
    ],
    "trust": 0.6
  }
}

VAR-202010-1294

Vulnerability from variot - Updated: 2025-12-22 23:07

A use after free issue was addressed with improved memory management. This issue is fixed in iOS 13.6 and iPadOS 13.6, tvOS 13.4.8, watchOS 6.2.8, Safari 13.1.2, iTunes 12.10.8 for Windows, iCloud for Windows 11.3, and iCloud for Windows 7.20. A remote attacker may be able to cause unexpected application termination or arbitrary code execution. User interaction is required to exploit this vulnerability: the target must visit a malicious page or open a malicious file. The specific flaw exists within the RenderWidget class. The issue results from the lack of validation of the existence of an object prior to performing operations on it. An attacker can leverage this vulnerability to execute code in the context of the current process. Apple Safari, iOS, iPadOS, tvOS, watchOS, and iTunes are all Apple products. Apple Safari is the web browser included by default with the macOS and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. WebKit is the web browser engine component used by these products. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple iOS prior to 13.6; iPadOS prior to 13.6; tvOS prior to 13.4.8; watchOS prior to 6.2.8; Safari prior to 13.1.2; and iTunes for Windows prior to 12.10.8.

CVE-2020-9915

Ayoub Ait Elmokhtar discovered that processing maliciously crafted
web content may prevent Content Security Policy from being
enforced.

CVE-2020-9925

An anonymous researcher discovered that processing maliciously
crafted web content may lead to universal cross site scripting.

For the stable distribution (buster), these problems have been fixed in version 2.28.4-1~deb10u1.

We recommend that you upgrade your webkit2gtk packages.

For the detailed security status of webkit2gtk please refer to its security tracker page at: https://security-tracker.debian.org/tracker/webkit2gtk

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

Summary:

An update is now available for Service Telemetry Framework 1.4 for RHEL 8. Description:

Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements.

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

  1. Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume 1813506 - Dockerfile not compatible with docker and buildah 1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup 1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement 1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance 1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https) 1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. 1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default 1842254 - [NooBaa] Compression stats do not add up when compression id disabled 1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster 1849771 - [RFE] Account created by OBC should have same permissions as bucket owner 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot 1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume 1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount 1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params) 1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14) 1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage 1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards 1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found 1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining 1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script 1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases. 1865938 - CSIDrivers missing in OCS 4.6 1867024 - [ocs-operator] operator v4.6.0-519.ci is in Installing state 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868060 - [External Cluster] Noobaa-default-backingstore PV in released state upon OCS 4.5 uninstall (Secret not found) 1868703 - [rbd] After volume expansion, the new size is not reflected on the pod 1869411 - capture full crash information from ceph 1870061 - [RHEL][IBM] OCS un-install should make the devices raw 1870338 - OCS 4.6 must-gather : ocs-must-gather-xxx-helper pod in ContainerCreationError (couldn't find key admin-secret) 1870631 - OCS 4.6 Deployment : RGW pods went into 'CrashLoopBackOff' state on Z Platform 1872119 - Updates don't work on StorageClass which will keep PV expansion disabled for upgraded cluster 1872696 - [ROKS][RFE]NooBaa Configure IBM COS as default backing store 1873864 - Noobaa: On an baremetal RHCOS cluster, some backingstores are stuck in PROGRESSING state with INVALID_ENDPOINT TemporaryError 1874606 - CVE-2020-7720 nodejs-node-forge: prototype 
pollution via the util.setPath function 1875476 - Change noobaa logo in the noobaa UI 1877339 - Incorrect use of logr 1877371 - NooBaa UI warning message on Deploy Kubernetes Pool process - typo and shown number is incorrect 1878153 - OCS 4.6 must-gather: collect node information under cluster_scoped_resources/oc_output directory 1878714 - [FIPS enabled] BadDigest error on file upload to noobaa bucket 1878853 - [External Mode] ceph-external-cluster-details-exporter.py does not tolerate TLS enabled RGW 1879008 - ocs-osd-removal job fails because it can't find admin-secret in rook-ceph-mon secret 1879072 - Deployment with encryption at rest is failing to bring up OSD pods 1879919 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed 1880255 - Collect rbd info and subvolume info and snapshot info command output 1881028 - CVE-2020-8237 nodejs-json-bigint: Prototype pollution via __proto__ assignment could result in DoS 1881071 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed 1882397 - MCG decompression problem with snappy on s390x arch 1883253 - CSV doesn't contain values required for UI to enable minimal deployment and cluster encryption 1883398 - Update csi sidecar containers in rook 1883767 - Using placement strategies in cluster-service.yaml causes ocs-operator to crash 1883810 - [External mode] RGW metrics is not available after OCS upgrade from 4.5 to 4.6 1883927 - Deployment with encryption at rest is failing to bring up OSD pods 1885175 - Handle disappeared underlying device for encrypted OSD 1885428 - panic seen in rook-ceph during uninstall - "close of closed channel" 1885648 - [Tracker for https://bugzilla.redhat.com/show_bug.cgi?id=1885700] FSTYPE for localvolumeset devices shows up as ext2 after uninstall 1885971 - ocs-storagecluster-cephobjectstore doesn't report true state of RGW 1886308 - Default VolumeSnapshot Classes not created in External Mode 1886348 - osd removal job failed with status "Error" 1886551 - 
Clone creation failed after timeout of 5 hours of Azure platrom for 3 CephFS PVCs ( PVC sizes: 1, 25 and 100 GB) 1886709 - [External] RGW storageclass disappears after upgrade from OCS 4.5 to 4.6 1886859 - OCS 4.6: Uninstall stuck indefinitely if any Ceph pods are in Pending state before uninstall 1886873 - [OCS 4.6 External/Internal Uninstall] - Storage Cluster deletion stuck indefinitely, "failed to delete object store", remaining users: [noobaa-ceph-objectstore-user] 1888583 - [External] When deployment is attempted without specifying the monitoring-endpoint while generating JSON, the CSV is stuck in installing state 1888593 - [External] Add validation for monitoring-endpoint and port in the exporter script 1888614 - [External] Unreachable monitoring-endpoint used during deployment causes ocs-operator to crash 1889441 - Traceback error message while running OCS 4.6 must-gather 1889683 - [GSS] Noobaa Problem when setting public access to a bucket 1889866 - Post node power off/on, an unused MON PVC still stays back in the cluster 1890183 - [External] ocs-operator logs are filled with "failed to reconcile metrics exporter" 1890638 - must-gather helper pod should be deleted after collecting ceph crash info 1890971 - [External] RGW metrics are not available if anything else except 9283 is provided as the monitoring-endpoint-port 1891856 - ocs-metrics-exporter pod should have tolerations for OCS taint 1892206 - [GSS] Ceph image/version mismatch 1892234 - clone #95 creation failed for CephFS PVC ( 10 GB PVC size) during multiple clones creation test 1893624 - Must Gather is not collecting the tar file from NooBaa diagnose 1893691 - OCS4.6 must_gather failes to complete in 600sec 1893714 - Bad response for upload an object with encryption 1895402 - Mon pods didn't get upgraded in 720 second timeout from OCS 4.5 upgrade to 4.6 1896298 - [RFE] Monitoring for Namespace buckets and resources 1896831 - Clone#452 for RBD PVC ( PVC size 1 GB) failed to be created for 600 secs 
1898521 - [CephFS] Deleting cephfsplugin pod along with app pods will make PV remain in Released state after deleting the PVC 1902627 - must-gather should wait for debug pods to be in ready state 1904171 - RGW Service is unavailable for a short period during upgrade to OCS 4.6

  1. Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3 quay.io/redhat/clair-jwt:v3.3.3 quay.io/redhat/quay-builder:v3.3.3 quay.io/redhat/clair:v3.3.3
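The download step above can be scripted as a loop over the four image references. A hedged sketch follows: an authenticated `podman login quay.io` with access to the redhat namespace is assumed, and `echo` stands in for the real pull so the loop can be shown without contacting the registry.

```shell
# Hedged sketch: iterate over the Quay v3.3.3 release images listed in this
# advisory. Replace echo with a real "podman pull" once logged in to quay.io.
for image in \
    quay.io/redhat/quay:v3.3.3 \
    quay.io/redhat/clair-jwt:v3.3.3 \
    quay.io/redhat/quay-builder:v3.3.3 \
    quay.io/redhat/clair:v3.3.3; do
  echo "podman pull $image"
done
```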

  1. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

  1. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

  1. Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if 
the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673
                   CVE-2022-24407
=====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
  • grafana: Snapshot authentication bypass (CVE-2021-39226)
  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
  • nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
  • grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
  • grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
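Before upgrading, the digest of a pulled release image can be compared against the value published above. A hedged sketch, not the advisory's procedure: the `actual` value is a placeholder standing in for the digest that `oc adm release info` would report on a live cluster.

```shell
# Hedged sketch: compare a release image digest against the x86_64 value
# published in this advisory. On a real cluster the digest would come from:
#   oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
expected="sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56"
actual="$expected"   # placeholder standing in for the queried digest
if [ "$actual" = "$expected" ]; then
  echo "release digest matches advisory"
else
  echo "digest mismatch - do not upgrade from this image" >&2
  exit 1
fi
```

Pinning upgrades by digest (`oc adm upgrade --to-image=...@sha256:...`) rather than by tag is the usual way to guarantee the cluster moves to exactly the payload this advisory describes.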

  1. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform 1976894 - Unidling a StatefulSet does not work as expected 1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases 1977414 - Build Config timed out waiting for condition 400: Bad Request 1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus 1978528 - systemd-coredump started and failed intermittently for unknown reasons 1978581 - machine-config-operator: remove runlevel from mco namespace 1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable 1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9 1979966 - OCP builds always fail when run on RHEL7 nodes 1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading 1981549 - Machine-config daemon does not recover from broken Proxy configuration 1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel] 1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues 1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page 1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands 1982662 - Workloads - DaemonSets - Add storage: i18n misses 1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters 1983758 - upgrades are failing on disruptive tests 1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma" 1984592 - global pull secret not working in OCP4.7.4+ for additional private registries 1985073 - new-in-4.8 
ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs 1985486 - Cluster Proxy not used during installation on OSP with Kuryr 1985724 - VM Details Page missing translations 1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted 1985933 - Downstream image registry recommendation 1985965 - oVirt CSI driver does not report volume stats 1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding" 1986237 - "MachineNotYetDeleted" in Pending state , alert not fired 1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid" 1986302 - console continues to fetch prometheus alert and silences for normal user 1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI 1986338 - error creating list of resources in Import YAML 1986502 - yaml multi file dnd duplicates previous dragged files 1986819 - fix string typos for hot-plug disks 1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure 1987136 - Declare operatorframework.io/arch. 
labels for all operators 1987257 - Go-http-client user-agent being used for oc adm mirror requests 1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold 1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP 1988406 - SSH key dropped when selecting "Customize virtual machine" in UI 1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade 1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs 1989438 - expected replicas is wrong 1989502 - Developer Catalog is disappearing after short time 1989843 - 'More' and 'Show Less' functions are not translated on several page 1990014 - oc debug does not work for Windows pods 1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created 1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page 1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar 1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI 1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks 1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var 1990625 - Ironic agent registers with SLAAC address with privacy-stable 1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time 1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator 
package cic-operator FOREIGN KEY constraint failed. 2017427 - NTO does not restart TuneD daemon when profile application is taking too long 2017535 - Broken Argo CD link image on GitOps Details Page 2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references 2017564 - On-prem prepender dispatcher script overwrites DNS search settings 2017565 - CCMO does not handle additionalTrustBundle on Azure Stack 2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice 2017606 - [e2e][automation] add test to verify send key for VNC console 2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes 2017656 - VM IP address is "undefined" under VM details -> ssh field 2017663 - SSH password authentication is disabled when public key is not supplied 2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP 2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set 2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource 2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults 2017761 - [e2e][automation] dummy bug for 4.9 test dependency 2017872 - Add Sprint 209 translations 2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances 2017879 - Add Chinese translation for "alternate" 2017882 - multus: add handling of pod UIDs passed from runtime 2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods 2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI 2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS 2018094 - the tooltip length is limited 2018152 - CNI pod is not restarted when It cannot start servers due to ports being used 
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time 2018234 - user settings are saved in local storage instead of on cluster 2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?) 2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports) 2018275 - Topology graph doesn't show context menu for Export CSV 2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked 2018380 - Migrate docs links to access.redhat.com 2018413 - Error: context deadline exceeded, OCP 4.8.9 2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked 2018445 - [e2e][automation] enhance tests for downstream 2018446 - [e2e][automation] move tests to different level 2018449 - [e2e][automation] add test about create/delete network attachment definition 2018490 - [4.10] Image provisioning fails with file name too long 2018495 - Fix typo in internationalization README 2018542 - Kernel upgrade does not reconcile DaemonSet 2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit 2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes 2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950 2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10 2018985 - The rootdisk size is 15Gi of windows VM in customize wizard 2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
2019096 - Update SRO leader election timeout to support SNO 2019129 - SRO in operator hub points to wrong repo for README 2019181 - Performance profile does not apply 2019198 - ptp offset metrics are not named according to the log output 2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest 2019284 - Stop action should not in the action list while VMI is not running 2019346 - zombie processes accumulation and Argument list too long 2019360 - [RFE] Virtualization Overview page 2019452 - Logger object in LSO appends to existing logger recursively 2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect 2019634 - Pause and migration is enabled in action list for a user who has view only permission 2019636 - Actions in VM tabs should be disabled when user has view only permission 2019639 - "Take snapshot" should be disabled while VM image is still been importing 2019645 - Create button is not removed on "Virtual Machines" page for view only user 2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user 2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user 2019717 - cant delete VM with un-owned pvc attached 2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass 2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always" 2019744 - [RFE] Suggest users to download newest RHEL 8 version 2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level 2019827 - Display issue with top-level menu items running demo plugin 2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded 2019886 - Kuryr unable to finish ports recovery upon controller restart 2019948 - [RFE] Restructring Virtualization links 
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster 2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout 2019986 - Dynamic demo plugin fails to build 2019992 - instance:node_memory_utilisation:ratio metric is incorrect 2020001 - Update dockerfile for demo dynamic plugin to reflect dir change 2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation. 2020107 - cluster-version-operator: remove runlevel from CVO namespace 2020153 - Creation of Windows high performance VM fails 2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public 2020250 - Replacing deprecated ioutil 2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build 2020275 - ClusterOperators link in console returns blank page during upgrades 2020377 - permissions error while using tcpdump option with must-gather 2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined 2020498 - "Show PromQL" button is disabled 2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature 2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI 2020664 - DOWN subports are not cleaned up 2020904 - When trying to create a connection from the Developer view between VMs, it fails 2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana 2021017 - 404 page not found error on knative eventing page 2021031 - QE - Fix the topology CI scripts 2021048 - [RFE] Added MAC Spoof check 2021053 - Metallb operator presented as community operator 2021067 - Extensive number of requests from storage version operator in cluster 2021081 - Missing PolicyGenTemplate 
for configuring Local Storage Operator LocalVolumes 2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass 2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node 2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating 2021152 - imagePullPolicy is "Always" for ptp operator images 2021191 - Project admins should be able to list available network attachment defintions 2021205 - Invalid URL in git import form causes validation to not happen on URL change 2021322 - cluster-api-provider-azure should populate purchase plan information 2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind 2021364 - Installer requires invalid AWS permission s3:GetBucketReplication 2021400 - Bump documentationBaseURL to 4.10 2021405 - [e2e][automation] VM creation wizard Cloud Init editor 2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected 2021466 - [e2e][automation] Windows guest tool mount 2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver 2021551 - Build is not recognizing the USER group from an s2i image 2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character 2021629 - api request counts for current hour are incorrect 2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page 2021693 - Modals assigned modal-lg class are no longer the correct width 2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines 2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled 2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags 2022050 - [BM][IPI] Failed during bootstrap - unable to read 
client-key /var/lib/kubelet/pki/kubelet-client-current.pem 2022053 - dpdk application with vhost-net is not able to start 2022114 - Console logging every proxy request 2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent) 2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long 2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . 2022447 - ServiceAccount in manifests conflicts with OLM 2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 2022509 - getOverrideForManifest does not check manifest.GVK.Group 2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache 2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard 2022627 - Machine object not picking up external FIP added to an openstack vm 2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:' 2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox 2022801 - Add Sprint 210 translations 2022811 - Fix kubelet log rotation file handle leak 2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations 2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests 2022880 - Pipeline renders with minor visual artifact with certain task dependencies 2022886 - Incorrect URL in operator description 2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config 2023060 - [e2e][automation] Windows VM with CDROM migration 2023077 - [e2e][automation] Home Overview Virtualization status 2023090 - [e2e][automation] Examples of Import URL for VM templates 2023102 - [e2e][automation] Cloudinit disk of VM from custom template 2023216 - ACL for a deleted egressfirewall still present on node 
join switch 2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9 2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy 2023342 - SCC admission should take ephemeralContainers into account 2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden) 2023434 - Update Azure Machine Spec API to accept Marketplace Images 2023500 - Latency experienced while waiting for volumes to attach to node 2023522 - can't remove package from index: database is locked 2023560 - "Network Attachment Definitions" has no project field on the top in the list view 2023592 - [e2e][automation] add mac spoof check for nad 2023604 - ACL violation when deleting a provisioning-configuration resource 2023607 - console returns blank page when normal user without any projects visit Installed Operators page 2023638 - Downgrade support level for extended control plane integration to Dev Preview 2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10 2023675 - Changing CNV Namespace 2023779 - Fix Patch 104847 in 4.9 2023781 - initial hardware devices is not loading in wizard 2023832 - CCO updates lastTransitionTime for non-Status changes 2023839 - Bump recommended FCOS to 34.20211031.3.0 2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly 2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository 2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8 2024055 - External DNS added extra prefix for the TXT record 2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully 2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json 2024199 - 400 Bad Request error for some queries for the non admin user 2024220 - Cluster monitoring checkbox flickers when 
installing Operator in all-namespace mode 2024262 - Sample catalog is not displayed when one API call to the backend fails 2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability 2024316 - modal about support displays wrong annotation 2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected 2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page 2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view 2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined 2024515 - test-blocker: Ceph-storage-plugin tests failing 2024535 - hotplug disk missing OwnerReference 2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image 2024547 - Detail page is breaking for namespace store , backing store and bucket class. 2024551 - KMS resources not getting created for IBM FlashSystem storage 2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel 2024613 - pod-identity-webhook starts without tls 2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded 2024665 - Bindable services are not shown on topology 2024731 - linuxptp container: unnecessary checking of interfaces 2024750 - i18n some remaining OLM items 2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured 2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack 2024841 - test Keycloak with latest tag 2024859 - Not able to deploy an existing image from private image registry using developer console 2024880 - Egress IP breaks when network policies are applied 2024900 - Operator upgrade kube-apiserver 2024932 - console throws "Unauthorized" error after logging out 2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up 2025093 - Installer does not honour diskformat 
specified in storage policy and defaults to zeroedthick 2025230 - ClusterAutoscalerUnschedulablePods should not be a warning 2025266 - CreateResource route has exact prop which need to be removed 2025301 - [e2e][automation] VM actions availability in different VM states 2025304 - overwrite storage section of the DV spec instead of the pvc section 2025431 - [RFE]Provide specific windows source link 2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36 2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node 2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local 2025481 - Update VM Snapshots UI 2025488 - [DOCS] Update the doc for nmstate operator installation 2025592 - ODC 4.9 supports invalid devfiles only 2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings" 2025767 - VMs orphaned during machineset scaleup 2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard 2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku. 
2025821 - Make "Network Attachment Definitions" available to regular user 2025823 - The console nav bar ignores plugin separator in existing sections 2025830 - CentOS capitalizaion is wrong 2025837 - Warn users that the RHEL URL expire 2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin- 2025903 - [UI] RoleBindings tab doesn't show correct rolebindings 2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2026178 - OpenShift Alerting Rules Style-Guide Compliance 2026209 - Updation of task is getting failed (tekton hub integration) 2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io" 2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates 2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct 2026352 - Kube-Scheduler revision-pruner fail during install of new cluster 2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment 2026383 - Error when rendering custom Grafana dashboard through ConfigMap 2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation 2026396 - Cachito Issues: sriov-network-operator Image build failure 2026488 - openshift-controller-manager - delete event is repeating pathologically 2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
2026560 - Cluster-version operator does not remove unrecognized volume mounts 2026699 - fixed a bug with missing metadata 2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator 2026898 - Description/details are missing for Local Storage Operator 2027132 - Use the specific icon for Fedora and CentOS template 2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend 2027272 - KubeMemoryOvercommit alert should be human readable 2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group 2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue) 2027299 - The status of checkbox component is not revealed correctly in code 2027311 - K8s watch hooks do not work when fetching core resources 2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation 2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images 2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation 2027498 - [IBMCloud] SG Name character length limitation 2027501 - [4.10] Bootimage bump tracker 2027524 - Delete Application doesn't delete Channels or Brokers 2027563 - e2e/add-flow-ci.feature fix accessibility violations 2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges 2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions 2027685 - openshift-cluster-csi-drivers pods crashing on PSI 2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced 2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string 2027917 - No settings in hostfirmwaresettings and schema objects for masters 2027927 - sandbox creation fails due to obsolete 
option in /etc/containers/storage.conf 2027982 - nncp stucked at ConfigurationProgressing 2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters 2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed 2028030 - Panic detected in cluster-image-registry-operator pod 2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found" 2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9 2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin 2028141 - Console tests doesn't pass on Node.js 15 and 16 2028160 - Remove i18nKey in network-policy-peer-selectors.tsx 2028162 - Add Sprint 210 translations 2028170 - Remove leading and trailing whitespace 2028174 - Add Sprint 210 part 2 translations 2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it 2028217 - Cluster-version operator does not default Deployment replicas to one 2028240 - Multiple CatalogSources causing higher CPU use than necessary 2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings 2028325 - disableDrain should be set automatically on SNO 2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel 2028531 - Missing netFilter to the list of parameters when platform is OpenStack 2028610 - Installer doesn't retry on GCP rate limiting 2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting 2028695 - destroy cluster does not prune bootstrap instance profile 2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs 2028802 - CRI-O panic due to invalid memory address or nil pointer dereference 2028816 - VLAN IDs not released on failures 2028881 - Override not working for the PerformanceProfile template 2028885 - Console 
should show an error context if it logs an error object 2028949 - Masthead dropdown item hover text color is incorrect 2028963 - Whereabouts should reconcile stranded IP addresses 2029034 - enabling ExternalCloudProvider leads to inoperative cluster 2029178 - Create VM with wizard - page is not displayed 2029181 - Missing CR from PGT 2029273 - wizard is not able to use if project field is "All Projects" 2029369 - Cypress tests github rate limit errors 2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out 2029394 - missing empty text for hardware devices at wizard review 2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used 2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl 2029521 - EFS CSI driver cannot delete volumes under load 2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle 2029579 - Clicking on an Application which has a Helm Release in it causes an error 2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE 2029645 - Sync upstream 1.15.0 downstream 2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing 2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip 2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage 2029785 - CVO panic when an edge is included in both edges and conditionaledges 2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp) 2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error 2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace 2030228 - Fix StorageSpec resources field to use correct API 2030229 - Mirroring status card reflect wrong data 2030240 - Hide overview page for non-privileged user 2030305 - Export App job do not completes 2030347 - 
kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since
v1.3.2 2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0 2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension' 2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s] 2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members 2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes 2050370 - alert data for burn budget needs to be updated to prevent regression 2050393 - ZTP missing support for local image registry and custom machine config 2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud 2050737 - Remove metrics and events for master port offsets 2050801 - Vsphere upi tries to access vsphere during manifests generation phase 2050883 - Logger object in LSO does not log source location accurately 2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit 2052062 - Whereabouts should implement client-go 1.22+ 2052125 - [4.10] Crio appears to be coredumping in some scenarios 2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config 2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1 2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch 2052756 - [4.10] PVs are not being cleaned up after PVC deletion 2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index 2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format" 2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs 2053268 - inability to detect static lifecycle failure 2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates 2053323 - OpenShift-Ansible BYOH Unit Tests are Broken 2053339 - Remove dev preview badge from IBM FlashSystem deployment windows 2053751 - ztp-site-generate container is missing convenience entrypoint 2053945 - [4.10] Failed to apply sriov policy on intel nics 2054109 - Missing "app" label 2054154 - RoleBinding in project without subject is causing "Project access" page to fail 2054244 - Latest pipeline run should be listed on the top of the pipeline run list 2054288 - console-master-e2e-gcp-console is broken 2054562 - DPU network operator 4.10 branch need to sync with master 2054897 - Unable to deploy hw-event-proxy operator 2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently 2055358 - Summary Interval Hardcoded in PTP Operator 
if Set in the Global Body Instead of Command Line 2055371 - Remove Check which enforces summary_interval must match logSyncInterval 2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11 2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API 2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured 2056479 - ovirt-csi-driver-node pods are crashing intermittently 2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope" 2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes" 2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs 2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation 2056948 - post 1.23 rebase: regression in service-load balancer reliability 2057438 - Service Level Agreement (SLA) always show 'Unknown' 2057721 - Fix Proxy support in RHACM 2.4.2 2057724 - Image creation fails when NMstateConfig CR is empty 2058641 - [4.10] Pod density test causing problems when using kube-burner 2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install 2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials 2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc

References:

https://access.redhat.com/security/cve/CVE-2014-3577
https://access.redhat.com/security/cve/CVE-2016-10228
https://access.redhat.com/security/cve/CVE-2017-14502
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2018-1000858
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9169
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-25013
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-8927
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-9952
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-13434
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-15358
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-25660
https://access.redhat.com/security/cve/CVE-2020-25677
https://access.redhat.com/security/cve/CVE-2020-27618
https://access.redhat.com/security/cve/CVE-2020-27781
https://access.redhat.com/security/cve/CVE-2020-29361
https://access.redhat.com/security/cve/CVE-2020-29362
https://access.redhat.com/security/cve/CVE-2020-29363
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3326
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3521
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3733
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-21684
https://access.redhat.com/security/cve/CVE-2021-22946
https://access.redhat.com/security/cve/CVE-2021-22947
https://access.redhat.com/security/cve/CVE-2021-25215
https://access.redhat.com/security/cve/CVE-2021-27218
https://access.redhat.com/security/cve/CVE-2021-30666
https://access.redhat.com/security/cve/CVE-2021-30761
https://access.redhat.com/security/cve/CVE-2021-30762
https://access.redhat.com/security/cve/CVE-2021-33928
https://access.redhat.com/security/cve/CVE-2021-33929
https://access.redhat.com/security/cve/CVE-2021-33930
https://access.redhat.com/security/cve/CVE-2021-33938
https://access.redhat.com/security/cve/CVE-2021-36222
https://access.redhat.com/security/cve/CVE-2021-37750
https://access.redhat.com/security/cve/CVE-2021-39226
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-43813
https://access.redhat.com/security/cve/CVE-2021-44716
https://access.redhat.com/security/cve/CVE-2021-44717
https://access.redhat.com/security/cve/CVE-2022-0532
https://access.redhat.com/security/cve/CVE-2022-21673
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate

Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

Show details on source website
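The record below is JSON-LD: its "@context" block maps each short key (e.g. "affected_products", "cvss") either to an explicit "@id" IRI or, for keys without an entry, to the default "@vocab" namespace. As a minimal sketch of how that expansion works (illustrative only; `expand_term` is a hypothetical helper, not part of the VARIoT record or any JSON-LD library), consider:

```python
# Sketch of JSON-LD term expansion against a trimmed-down copy of the
# @context used by the VARIoT record below (assumption: only a few keys
# are reproduced here for brevity).
context = {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {"@id": "https://www.variotdbs.pl/ref/affected_products"},
    "cvss": {"@id": "https://www.variotdbs.pl/ref/cvss/"},
}

def expand_term(term: str, ctx: dict) -> str:
    """Resolve a short key to its full IRI: an explicit @id in the
    context wins; otherwise fall back to the @vocab namespace."""
    entry = ctx.get(term)
    if isinstance(entry, dict) and "@id" in entry:
        return entry["@id"]
    return ctx["@vocab"] + term

print(expand_term("affected_products", context))
# https://www.variotdbs.pl/ref/affected_products
print(expand_term("cve", context))
# https://www.variotdbs.pl/ref/VARIoTentry#cve
```

Keys such as "cve" have no context entry in the record, so under JSON-LD rules they expand via "@vocab", as the second call shows.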

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202010-1294",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.8"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.20"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.4.8"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.0"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "6.2.8"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.3"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.6"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.1.2"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.6"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.1.2 (macos high sierra)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 7.20 (windows 7 and later)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.6 (iphone 6s and later)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.6 (ipad mini 4 and later)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.1.2 (macos mojave)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 11.3 (windows 10 and later, obtained from the microsoft store)"
      },
      {
        "model": "watchos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 6.2.8 (apple watch series 1 and later)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.6 (ipod touch 7th generation)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4.8 (apple tv hd)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.6 (ipad air 2 and later)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4.8 (apple tv 4k)"
      },
      {
        "model": "itunes",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows, earlier than 12.10.8 (windows 7 and later)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.1.2 \u672a\u6e80 (macos catalina)"
      },
      {
        "model": "safari",
        "scope": null,
        "trust": 0.7,
        "vendor": "apple",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:watchos",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "0011",
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2020-9893",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-9893",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Medium",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 6.8,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-009938",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-188018",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2020-9893",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-009938",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          },
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "ZDI",
            "availabilityImpact": "HIGH",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.6,
            "id": "CVE-2020-9893",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 0.7,
            "userInteraction": "REQUIRED",
            "vectorString": "AV:N/AC:H/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-9893",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-009938",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "ZDI",
            "id": "CVE-2020-9893",
            "trust": 0.7,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202007-1150",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-188018",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-9893",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A use after free issue was addressed with improved memory management. This issue is fixed in iOS 13.6 and iPadOS 13.6, tvOS 13.4.8, watchOS 6.2.8, Safari 13.1.2, iTunes 12.10.8 for Windows, iCloud for Windows 11.3, iCloud for Windows 7.20. A remote attacker may be able to cause unexpected application termination or arbitrary code execution. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file.The specific flaw exists within the RenderWidget class. The issue results from the lack of validating the existence of an object prior to performing operations on the object. An attacker can leverage this vulnerability to execute code in the context of the current process. Apple Safari, etc. are all products of Apple (Apple). Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. WebKit is one of the web browser engine components. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple iOS prior to 13.6; iPadOS prior to 13.6; tvOS prior to 13.4.8; watchOS prior to 6.2.8; Safari prior to 13.1.2; Windows-based iTunes prior to 12.10.8. \n\nCVE-2020-9915\n\n    Ayoub Ait Elmokhtar discovered that processing maliciously crafted\n    web content may prevent Content Security Policy from being\n    enforced. \n\nCVE-2020-9925\n\n    An anonymous researcher discovered that processing maliciously\n    crafted web content may lead to universal cross site scripting. \n\nFor the stable distribution (buster), these problems have been fixed in\nversion 2.28.4-1~deb10u1. \n\nWe recommend that you upgrade your webkit2gtk packages. 
\n\nFor the detailed security status of webkit2gtk please refer to its\nsecurity tracker page at:\nhttps://security-tracker.debian.org/tracker/webkit2gtk\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAl8oKD5fFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0SagQ//ZltrMBv+gUHbNMk4foYl7hv9PmHRd0U+KR4sdAPvhF+UWYSTJbAbwH1b\nPKTByJixoZfCntg/KMd9f16EHX13TbBw5M4rnZ7/4oM5AZfUifGwEwAOH5jK1/IS\n1p/CnGlfTc3i5tZhI0xKkSLruSSVswzuikRUQ/4DaC/tRUbzGs67U2iuQ8Z4e8A3\nvIe6/P+y6svAsbaSbCBKi72IKzLzkqraBUVXjSgs17xnzkuaqeCBFBQRYDII6gm1\ncY0mZd5MknDjc3BrNBhpGA0VJ3SI+6RhM7k7oUKcCCoL2Q6c/PPipSqcLZYita+u\ncsmvikeusNMOv8Z6JKwvvbjWv6A199x8ddZgjIuQIIWJA4xHbOqJCyzwn9YBEdpS\n7DG/VGFRAJW2O7FHBTE04wSjOwxHuRcXra6Yc9Ty80BaWuPOJ/FlfD2X+ojub+i5\nL6FHBhR0eQ2e7zW5xd6GnF/WbNYHF09K2qblCRw/DdyL2TLAtVS02JAriUUK0Bwl\n3HAtF8a1EjIudabf/uENDq+ZHzd5zbl6maSOKH9e8ajaFFk4wAcRsHbs13v4nnqK\ncYuD+n7WHv3uU6VTfRh2nLkyIbR3udoDq1MpJsatuWHNRwnYVI82c/EjsBDkznG6\nyYy3d+RnnexDgF4rZ7XuOFK52JEpr5whTGg/Nn7wVw8YQUXXGO0=\n=ANGc\n-----END PGP SIGNATURE-----\n. Summary:\n\nAn update is now available for Service Telemetry Framework 1.4 for RHEL 8. Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. \nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. 
In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nThese updated images include numerous security fixes, bug fixes, and\nenhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume\n1813506 - Dockerfile not  compatible with docker and buildah\n1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup\n1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement\n1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance\n1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)\n1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. 
\n1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default\n1842254 - [NooBaa] Compression stats do not add up when compression id disabled\n1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster\n1849771 - [RFE] Account created by OBC should have same permissions as bucket owner\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot\n1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume\n1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount\n1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params)\n1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips \"b\" and \"c\" (spawned from Bug 1840084#c14)\n1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage\n1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards\n1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found\n1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining\n1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script\n1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH  while running couple of OCS test cases. \n1865938 - CSIDrivers missing in OCS 4.6\n1867024 - [ocs-operator] operator v4.6.0-519.ci is in Installing state\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868060 - [External Cluster] Noobaa-default-backingstore PV in released state upon OCS 4.5 uninstall (Secret not found)\n1868703 - [rbd] After volume expansion, the new size is not reflected on the pod\n1869411 - capture full crash information from ceph\n1870061 - [RHEL][IBM] OCS un-install should make the devices raw\n1870338 - OCS 4.6 must-gather : ocs-must-gather-xxx-helper pod in ContainerCreationError (couldn\u0027t find key admin-secret)\n1870631 - OCS 4.6 Deployment : RGW pods went into \u0027CrashLoopBackOff\u0027 state on Z Platform\n1872119 - Updates don\u0027t work on StorageClass which will keep PV expansion disabled for upgraded cluster\n1872696 - [ROKS][RFE]NooBaa Configure IBM COS as default backing store\n1873864 - Noobaa: On an baremetal RHCOS cluster, some backingstores are stuck in PROGRESSING state with INVALID_ENDPOINT TemporaryError\n1874606 - 
CVE-2020-7720 nodejs-node-forge: prototype pollution via the util.setPath function\n1875476 - Change noobaa logo in the noobaa UI\n1877339 - Incorrect use of logr\n1877371 - NooBaa UI warning message on Deploy Kubernetes Pool process - typo and shown number is incorrect\n1878153 - OCS 4.6 must-gather: collect node information under cluster_scoped_resources/oc_output directory\n1878714 - [FIPS enabled] BadDigest error on file upload to noobaa bucket\n1878853 - [External Mode] ceph-external-cluster-details-exporter.py  does not tolerate TLS enabled RGW\n1879008 - ocs-osd-removal job fails because it can\u0027t find admin-secret in rook-ceph-mon secret\n1879072 - Deployment with encryption at rest is failing to bring up OSD pods\n1879919 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed\n1880255 - Collect rbd info and subvolume info and snapshot info command output\n1881028 - CVE-2020-8237 nodejs-json-bigint: Prototype pollution via `__proto__` assignment could result in DoS\n1881071 - [External] Upgrade mechanism from OCS 4.5 to OCS 4.6 needs to be fixed\n1882397 - MCG decompression problem with snappy on s390x arch\n1883253 - CSV doesn\u0027t contain values required for UI to enable minimal deployment and cluster encryption\n1883398 - Update csi sidecar containers in rook\n1883767 - Using placement strategies in cluster-service.yaml causes ocs-operator to crash\n1883810 - [External mode]  RGW metrics is not available after OCS upgrade from 4.5 to 4.6\n1883927 - Deployment with encryption at rest is failing to bring up OSD pods\n1885175 - Handle disappeared underlying device for encrypted OSD\n1885428 - panic seen in rook-ceph during uninstall - \"close of closed channel\"\n1885648 - [Tracker for https://bugzilla.redhat.com/show_bug.cgi?id=1885700] FSTYPE for localvolumeset devices shows up as ext2 after uninstall\n1885971 - ocs-storagecluster-cephobjectstore doesn\u0027t report true state of RGW\n1886308 - Default VolumeSnapshot Classes not 
created in External Mode\n1886348 - osd removal job failed with status \"Error\"\n1886551 - Clone creation failed after timeout of 5 hours of Azure platrom for 3 CephFS PVCs ( PVC sizes: 1, 25 and 100 GB)\n1886709 - [External] RGW storageclass disappears after upgrade from OCS 4.5 to 4.6\n1886859 - OCS 4.6: Uninstall stuck indefinitely if any Ceph pods are in Pending state before uninstall\n1886873 - [OCS 4.6 External/Internal Uninstall] - Storage Cluster deletion stuck indefinitely, \"failed to delete object store\", remaining users: [noobaa-ceph-objectstore-user]\n1888583 - [External] When deployment is attempted without specifying the monitoring-endpoint while generating JSON, the CSV is stuck in installing state\n1888593 - [External] Add validation for monitoring-endpoint and port in the exporter script\n1888614 - [External] Unreachable monitoring-endpoint used during deployment causes ocs-operator to crash\n1889441 - Traceback error message while running OCS 4.6 must-gather\n1889683 - [GSS] Noobaa Problem when setting public access to a bucket\n1889866 - Post node power off/on, an unused MON PVC still stays back in the cluster\n1890183 - [External] ocs-operator logs are filled with \"failed to reconcile metrics exporter\"\n1890638 - must-gather helper pod should be deleted after collecting ceph crash info\n1890971 - [External] RGW metrics are not available if anything else except 9283 is provided as the monitoring-endpoint-port\n1891856 - ocs-metrics-exporter pod should have tolerations for OCS taint\n1892206 - [GSS] Ceph image/version mismatch\n1892234 - clone #95 creation failed for CephFS PVC ( 10 GB PVC size) during multiple clones creation test\n1893624 - Must Gather is not collecting the tar file from NooBaa diagnose\n1893691 - OCS4.6 must_gather failes to complete in 600sec\n1893714 - Bad response for upload an object with encryption\n1895402 - Mon pods didn\u0027t get upgraded in 720 second timeout from OCS 4.5 upgrade to 4.6\n1896298 - [RFE] 
Monitoring for Namespace buckets and resources\n1896831 - Clone#452 for RBD PVC ( PVC size 1 GB) failed to be created for 600 secs\n1898521 - [CephFS] Deleting cephfsplugin pod along with app pods will make PV remain in Released state after deleting the PVC\n1902627 - must-gather should wait for debug pods to be in ready state\n1904171 - RGW Service is unavailable for a short period during upgrade to OCS 4.6\n\n5. Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. \n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1899479 - Aggregator pod tries to parse ConfigMaps without results\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902251 - The compliancesuite object returns error with ocp4-cis tailored profile\n1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object\n1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object\n1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator\n1909081 - Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error \"something else exists at that path\"\n1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1732329 - Virtual Machine is missing documentation of its properties in yaml editor\n1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv\n1791753 - [RFE] [SSP] Template validator should check validations in template\u0027s parent template\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1848954 - KMP missing CA extensions  in cabundle of mutatingwebhookconfiguration\n1848956 - KMP  requires downtime for CA stabilization during certificate rotation\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1853911 - VM with dot in network name fails to start with unclear message\n1854098 - NodeNetworkState on workers doesn\u0027t have \"status\" key due to nmstate-handler pod failure to run \"nmstatectl show\"\n1856347 - SR-IOV : Missing network name for sriov during vm setup\n1856953 - CVE-2020-15586 golang: data race in certain 
net/http servers including ReverseProxy can lead to DoS\n1859235 - Common Templates - after upgrade there are 2  common templates per each os-workload-flavor combination\n1860714 - No API information from `oc explain`\n1860992 - CNV upgrade - users are not removed from privileged  SecurityContextConstraints\n1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem\n1866593 - CDI is not handling vm disk clone\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868817 - Container-native Virtualization 2.6.0 Images\n1873771 - Improve the VMCreationFailed error message caused by VM low memory\n1874812 - SR-IOV: Guest Agent  expose link-local ipv6 address  for sometime and then remove it\n1878499 - DV import doesn\u0027t recover from scratch space PVC deletion\n1879108 - Inconsistent naming of \"oc virt\" command in help text\n1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running\n1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message\n1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used\n1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied\n1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID:       RHSA-2022:0056-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:0056\nIssue date:        2022-03-10\nCVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n                   
CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n                   CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for  additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image 
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates\n1961509 - DHCP daemon pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not 
established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state, alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN] After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO] ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on  self._get_vip_port(loadbalancer).id   
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10]  sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization  : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All,  data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled\n2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags\n2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem\n2022053 - dpdk application with vhost-net is not able to start\n2022114 - Console logging every proxy request\n2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack  (Intermittent)\n2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long\n2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2022509 - getOverrideForManifest does not check manifest.GVK.Group\n2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2022612 - no namespace field for \"Kubernetes / Compute Resources / Namespace (Pods)\" admin console dashboard\n2022627 - Machine object not picking up external FIP added to an openstack vm\n2022646 - configure-ovs.sh failure -  Error: unknown connection \u0027WARN:\u0027\n2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox\n2022801 - Add Sprint 210 translations\n2022811 - Fix kubelet log rotation file handle leak\n2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations\n2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2022880 - Pipeline renders with minor visual artifact with certain task dependencies\n2022886 - Incorrect URL in operator description\n2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config\n2023060 - [e2e][automation] Windows VM with CDROM migration\n2023077 - [e2e][automation] 
Home Overview Virtualization status\n2023090 - [e2e][automation] Examples of Import URL for VM templates\n2023102 - [e2e][automation] Cloudinit disk of VM from custom template\n2023216 - ACL for a deleted egressfirewall still present on node join switch\n2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9\n2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image  Django example should work with hot deploy\n2023342 - SCC admission should take ephemeralContainers into account\n2023356 - Devfiles can\u0027t be loaded in Safari on macOS (403 - Forbidden)\n2023434 - Update Azure Machine Spec API to accept Marketplace Images\n2023500 - Latency experienced while waiting for volumes to attach to node\n2023522 - can\u0027t remove package from index: database is locked\n2023560 - \"Network Attachment Definitions\" has no project field on the top in the list view\n2023592 - [e2e][automation] add mac spoof check for nad\n2023604 - ACL violation when deleting a provisioning-configuration resource\n2023607 - console returns blank page when normal user without any projects visit Installed Operators page\n2023638 - Downgrade support level for extended control plane integration to Dev Preview\n2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10\n2023675 - Changing CNV Namespace\n2023779 - Fix Patch 104847 in 4.9\n2023781 - initial hardware devices is not loading in wizard\n2023832 - CCO updates lastTransitionTime for non-Status changes\n2023839 - Bump recommended FCOS to 34.20211031.3.0\n2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly\n2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from \"registry:5000\" repository\n2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8\n2024055 - External DNS added extra prefix for the TXT record\n2024108 - Occasionally node remains in 
SchedulingDisabled state even after update has been completed sucessfully\n2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json\n2024199 - 400 Bad Request error for some queries for the non admin user\n2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode\n2024262 - Sample catalog is not displayed when one API call to the backend fails\n2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability\n2024316 - modal about support displays wrong annotation\n2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected\n2024399 - Extra space is in the translated text of \"Add/Remove alternate service\" on Create Route page\n2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view\n2024493 - Observe \u003e Alerting \u003e Alerting rules page throws error trying to destructure undefined\n2024515 - test-blocker: Ceph-storage-plugin tests failing\n2024535 - hotplug disk missing OwnerReference\n2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image\n2024547 - Detail page is breaking for namespace store , backing store and bucket class. 
\n2024551 - KMS resources not getting created for IBM FlashSystem storage\n2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel\n2024613 - pod-identity-webhook starts without tls\n2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded\n2024665 - Bindable services are not shown on topology\n2024731 - linuxptp container: unnecessary checking of interfaces\n2024750 - i18n some remaining OLM items\n2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured\n2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack\n2024841 - test Keycloak with latest tag\n2024859 - Not able to deploy an existing image from private image registry using developer console\n2024880 - Egress IP breaks when network policies are applied\n2024900 - Operator upgrade kube-apiserver\n2024932 - console throws \"Unauthorized\" error after logging out\n2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up\n2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick\n2025230 - ClusterAutoscalerUnschedulablePods should not be a warning\n2025266 - CreateResource route has exact prop which need to be removed\n2025301 - [e2e][automation] VM actions availability in different VM states\n2025304 - overwrite storage section of the DV spec instead of the pvc section\n2025431 - [RFE]Provide specific windows source link\n2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36\n2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node\n2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn\u0027t work for ExternalTrafficPolicy=local\n2025481 - Update VM Snapshots UI\n2025488 - [DOCS] Update the doc for nmstate operator installation\n2025592 - ODC 4.9 supports invalid devfiles 
only\n2025765 - It should not try to load from storageProfile after unchecking\"Apply optimized StorageProfile settings\"\n2025767 - VMs orphaned during machineset scaleup\n2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns \"kubevirt-hyperconverged\" while using customize wizard\n2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size\u2019s vCPUsAvailable instead of vCPUs for the sku. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity 
of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to  attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error  \"unable to pull manifest from example.com/busy.box:v5  invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-9893"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      }
    ],
    "trust": 3.06
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-9893",
        "trust": 4.0
      },
      {
        "db": "ZDI",
        "id": "ZDI-20-907",
        "trust": 1.3
      },
      {
        "db": "JVN",
        "id": "JVNVU95491800",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94090210",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938",
        "trust": 0.8
      },
      {
        "db": "ZDI_CAN",
        "id": "ZDI-CAN-10111",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "158688",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2434",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2659",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2812",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2785",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "158466",
        "trust": 0.6
      },
      {
        "db": "NSFOCUS",
        "id": "50223",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51501",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-188018",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9893",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168885",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160624",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "id": "VAR-202010-1294",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188018"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:07:49.088000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT211292",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211292"
      },
      {
        "title": "HT211293",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211293"
      },
      {
        "title": "HT211294",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211294"
      },
      {
        "title": "HT211295",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211295"
      },
      {
        "title": "HT211288",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211288"
      },
      {
        "title": "HT211290",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211290"
      },
      {
        "title": "HT211291",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211291"
      },
      {
        "title": "HT211293",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211293"
      },
      {
        "title": "HT211294",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211294"
      },
      {
        "title": "HT211295",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211295"
      },
      {
        "title": "HT211288",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211288"
      },
      {
        "title": "HT211290",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211290"
      },
      {
        "title": "HT211291",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211291"
      },
      {
        "title": "HT211292",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211292"
      },
      {
        "title": "Apple has issued an update to correct this vulnerability.",
        "trust": 0.7,
        "url": "https://support.apple.com/en-gb/HT211292"
      },
      {
        "title": "Multiple Apple product WebKit Fixes for component security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=124781"
      },
      {
        "title": "Debian Security Advisories: DSA-4739-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=1f5a43d66ba1bd3cae01970bb774e5f6"
      },
      {
        "title": "Apple: iCloud for Windows 7.20",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=50e6b35a047c9702f4cdebdf81483b05"
      },
      {
        "title": "Apple: iCloud for Windows 11.3",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=947a08401ec7e5f309d5ae26f5006f48"
      },
      {
        "title": "Apple: iOS 13.6 and iPadOS 13.6",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=a82d39d4c9a42fcf07757428b2f562b3"
      },
      {
        "title": null,
        "trust": 0.1,
        "url": "https://www.theregister.co.uk/2020/07/16/apple_july_updates/"
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-416",
        "trust": 1.9
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211288"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211290"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211291"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211292"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211293"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211294"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211295"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9893"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-9893"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu94090210/index.html"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu95491800/index.html"
      },
      {
        "trust": 0.7,
        "url": "https://support.apple.com/en-gb/ht211292"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.6,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2020/suse-su-20202232-1"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.zerodayinitiative.com/advisories/zdi-20-907/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "http://www.nsfocus.net/vulndb/50223"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2785/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht211291"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2434/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-multiple-vulnerabilities-32994"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2659/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2812/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-32847"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht211295"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211294"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211293"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211292"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158466/apple-security-advisory-2020-07-15-5.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158688/gentoo-linux-security-advisory-202007-61.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/416.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://exchange.xforce.ibmcloud.com/vulnerabilities/185386"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9915"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9894"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9895"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9925"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5605"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8237"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      }
    ],
    "sources": [
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-07-21T00:00:00",
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "date": "2020-10-16T00:00:00",
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "date": "2020-10-16T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "date": "2020-08-03T12:12:00",
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "date": "2022-08-09T14:36:05",
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "date": "2020-12-18T19:14:41",
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-07-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      },
      {
        "date": "2020-12-11T07:15:39",
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "date": "2020-10-16T17:15:16.293000",
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-07-21T00:00:00",
        "db": "ZDI",
        "id": "ZDI-20-907"
      },
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-188018"
      },
      {
        "date": "2020-10-20T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9893"
      },
      {
        "date": "2023-01-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      },
      {
        "date": "2020-12-11T07:15:39",
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      },
      {
        "date": "2024-11-21T05:41:28.970000",
        "db": "NVD",
        "id": "CVE-2020-9893"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      }
    ],
    "trust": 0.8
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Use-after-free vulnerability in multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009938"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "resource management error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1150"
      }
    ],
    "trust": 0.6
  }
}

VAR-202010-1296

Vulnerability from variot - Updated: 2025-12-22 23:07

A use after free issue was addressed with improved memory management. This issue is fixed in iOS 13.6 and iPadOS 13.6, tvOS 13.4.8, watchOS 6.2.8, Safari 13.1.2, iTunes 12.10.8 for Windows, iCloud for Windows 11.3, and iCloud for Windows 7.20. A remote attacker may be able to cause unexpected application termination or arbitrary code execution. The affected software is all produced by Apple: Safari is the default web browser shipped with the Mac OS X and iOS operating systems, iOS is Apple's operating system for mobile devices, and tvOS is its smart TV operating system. WebKit, the web browser engine component shared by these products, contains the vulnerability. The following products and versions are affected: Apple iOS prior to 13.6; iPadOS prior to 13.6; tvOS prior to 13.4.8; watchOS prior to 6.2.8; Safari prior to 13.1.2; and iTunes for Windows prior to 12.10.8.

For the stable distribution (buster), these problems have been fixed in version 2.28.4-1~deb10u1.

We recommend that you upgrade your webkit2gtk packages.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202007-61


                                       https://security.gentoo.org/

Severity: Normal
Title: WebKitGTK+: Multiple vulnerabilities
Date: July 31, 2020
Bugs: #734584
ID: 202007-61


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which could result in the arbitrary execution of code.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------
  1  net-libs/webkit-gtk        < 2.28.4                 >= 2.28.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebKitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.28.4"

References

[ 1 ] CVE-2020-9862
      https://nvd.nist.gov/vuln/detail/CVE-2020-9862
[ 2 ] CVE-2020-9893
      https://nvd.nist.gov/vuln/detail/CVE-2020-9893
[ 3 ] CVE-2020-9894
      https://nvd.nist.gov/vuln/detail/CVE-2020-9894
[ 4 ] CVE-2020-9895
      https://nvd.nist.gov/vuln/detail/CVE-2020-9895
[ 5 ] CVE-2020-9915
      https://nvd.nist.gov/vuln/detail/CVE-2020-9915
[ 6 ] CVE-2020-9925
      https://nvd.nist.gov/vuln/detail/CVE-2020-9925

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202007-61

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5 .

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918990 - ComplianceSuite scans use quay content image for initContainer 1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present 1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules 1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902251 - The compliancesuite object returns error with ocp4-cis tailored profile 1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object 1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object 1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator 1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" 1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if 
the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
  • grafana: Snapshot authentication bypass (CVE-2021-39226)
  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
  • nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
  • grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
  • grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for moderate instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state, alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work.
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing 
in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. 
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default 2013751 - Service details page is showing wrong in-cluster hostname 2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page 2013871 - Resource table headings are not aligned with their column data 2013895 - Cannot enable accelerated network via MachineSets on Azure 2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude" 2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG) 2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain 2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab) 2013996 - Project detail page: Action "Delete Project" does nothing for the default project 2014071 - Payload imagestream new tags not properly updated during cluster upgrade 2014153 - SRIOV exclusive pooling 2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace 2014238 - AWS console test is failing on importing duplicate YAML definitions 2014245 - Several aria-labels, external links, and labels aren't internationalized 2014248 - Several files aren't internationalized 2014352 - Could not filter out machine by using node name on machines page 2014464 - Unexpected spacing/padding below navigation groups in developer perspective 2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages 2014486 - Integration Tests: OLM single namespace operator tests failing 2014488 - Custom operator cannot change orders of condition tables 2014497 - Regex slows down different forms and creates too much recursion errors in the log 2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id' 2014614 - Metrics 
scraping requests should be assigned to exempt priority level 2014710 - TestIngressStatus test is broken on Azure 2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly 2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile 2015115 - [RFE] PCI passthrough 2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter 2015154 - Support ports defined networks and primarySubnet 2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic 2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production 2015386 - Possibility to add labels to the built-in OCP alerts 2015395 - Table head on Affinity Rules modal is not fully expanded 2015416 - CI implementation for Topology plugin 2015418 - Project Filesystem query returns No datapoints found 2015420 - No vm resource in project view's inventory 2015422 - No conflict checking on snapshot name 2015472 - Form and YAML view switch button should have distinguishable status 2015481 - [4.10] sriov-network-operator daemon pods are failing to start 2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting 2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English 2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator 
package cic-operator FOREIGN KEY constraint failed. 2017427 - NTO does not restart TuneD daemon when profile application is taking too long 2017535 - Broken Argo CD link image on GitOps Details Page 2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references 2017564 - On-prem prepender dispatcher script overwrites DNS search settings 2017565 - CCMO does not handle additionalTrustBundle on Azure Stack 2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice 2017606 - [e2e][automation] add test to verify send key for VNC console 2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes 2017656 - VM IP address is "undefined" under VM details -> ssh field 2017663 - SSH password authentication is disabled when public key is not supplied 2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP 2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set 2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource 2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults 2017761 - [e2e][automation] dummy bug for 4.9 test dependency 2017872 - Add Sprint 209 translations 2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances 2017879 - Add Chinese translation for "alternate" 2017882 - multus: add handling of pod UIDs passed from runtime 2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods 2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI 2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS 2018094 - the tooltip length is limited 2018152 - CNI pod is not restarted when It cannot start servers due to ports being used 
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time 2018234 - user settings are saved in local storage instead of on cluster 2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?) 2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports) 2018275 - Topology graph doesn't show context menu for Export CSV 2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked 2018380 - Migrate docs links to access.redhat.com 2018413 - Error: context deadline exceeded, OCP 4.8.9 2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked 2018445 - [e2e][automation] enhance tests for downstream 2018446 - [e2e][automation] move tests to different level 2018449 - [e2e][automation] add test about create/delete network attachment definition 2018490 - [4.10] Image provisioning fails with file name too long 2018495 - Fix typo in internationalization README 2018542 - Kernel upgrade does not reconcile DaemonSet 2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit 2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes 2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950 2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10 2018985 - The rootdisk size is 15Gi of windows VM in customize wizard 2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
2019096 - Update SRO leader election timeout to support SNO 2019129 - SRO in operator hub points to wrong repo for README 2019181 - Performance profile does not apply 2019198 - ptp offset metrics are not named according to the log output 2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest 2019284 - Stop action should not in the action list while VMI is not running 2019346 - zombie processes accumulation and Argument list too long 2019360 - [RFE] Virtualization Overview page 2019452 - Logger object in LSO appends to existing logger recursively 2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect 2019634 - Pause and migration is enabled in action list for a user who has view only permission 2019636 - Actions in VM tabs should be disabled when user has view only permission 2019639 - "Take snapshot" should be disabled while VM image is still been importing 2019645 - Create button is not removed on "Virtual Machines" page for view only user 2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user 2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user 2019717 - cant delete VM with un-owned pvc attached 2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass 2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always" 2019744 - [RFE] Suggest users to download newest RHEL 8 version 2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level 2019827 - Display issue with top-level menu items running demo plugin 2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded 2019886 - Kuryr unable to finish ports recovery upon controller restart 2019948 - [RFE] Restructring Virtualization links 
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster 2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout 2019986 - Dynamic demo plugin fails to build 2019992 - instance:node_memory_utilisation:ratio metric is incorrect 2020001 - Update dockerfile for demo dynamic plugin to reflect dir change 2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation. 2020107 - cluster-version-operator: remove runlevel from CVO namespace 2020153 - Creation of Windows high performance VM fails 2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public 2020250 - Replacing deprecated ioutil 2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build 2020275 - ClusterOperators link in console returns blank page during upgrades 2020377 - permissions error while using tcpdump option with must-gather 2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined 2020498 - "Show PromQL" button is disabled 2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature 2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI 2020664 - DOWN subports are not cleaned up 2020904 - When trying to create a connection from the Developer view between VMs, it fails 2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana 2021017 - 404 page not found error on knative eventing page 2021031 - QE - Fix the topology CI scripts 2021048 - [RFE] Added MAC Spoof check 2021053 - Metallb operator presented as community operator 2021067 - Extensive number of requests from storage version operator in cluster 2021081 - Missing PolicyGenTemplate 
for configuring Local Storage Operator LocalVolumes 2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass 2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node 2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating 2021152 - imagePullPolicy is "Always" for ptp operator images 2021191 - Project admins should be able to list available network attachment defintions 2021205 - Invalid URL in git import form causes validation to not happen on URL change 2021322 - cluster-api-provider-azure should populate purchase plan information 2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind 2021364 - Installer requires invalid AWS permission s3:GetBucketReplication 2021400 - Bump documentationBaseURL to 4.10 2021405 - [e2e][automation] VM creation wizard Cloud Init editor 2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected 2021466 - [e2e][automation] Windows guest tool mount 2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver 2021551 - Build is not recognizing the USER group from an s2i image 2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character 2021629 - api request counts for current hour are incorrect 2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page 2021693 - Modals assigned modal-lg class are no longer the correct width 2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines 2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled 2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags 2022050 - [BM][IPI] Failed during bootstrap - unable to read 
client-key /var/lib/kubelet/pki/kubelet-client-current.pem 2022053 - dpdk application with vhost-net is not able to start 2022114 - Console logging every proxy request 2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent) 2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long 2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . 2022447 - ServiceAccount in manifests conflicts with OLM 2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 2022509 - getOverrideForManifest does not check manifest.GVK.Group 2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache 2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard 2022627 - Machine object not picking up external FIP added to an openstack vm 2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:' 2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox 2022801 - Add Sprint 210 translations 2022811 - Fix kubelet log rotation file handle leak 2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations 2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests 2022880 - Pipeline renders with minor visual artifact with certain task dependencies 2022886 - Incorrect URL in operator description 2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config 2023060 - [e2e][automation] Windows VM with CDROM migration 2023077 - [e2e][automation] Home Overview Virtualization status 2023090 - [e2e][automation] Examples of Import URL for VM templates 2023102 - [e2e][automation] Cloudinit disk of VM from custom template 2023216 - ACL for a deleted egressfirewall still present on node 
join switch 2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9 2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy 2023342 - SCC admission should take ephemeralContainers into account 2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden) 2023434 - Update Azure Machine Spec API to accept Marketplace Images 2023500 - Latency experienced while waiting for volumes to attach to node 2023522 - can't remove package from index: database is locked 2023560 - "Network Attachment Definitions" has no project field on the top in the list view 2023592 - [e2e][automation] add mac spoof check for nad 2023604 - ACL violation when deleting a provisioning-configuration resource 2023607 - console returns blank page when normal user without any projects visit Installed Operators page 2023638 - Downgrade support level for extended control plane integration to Dev Preview 2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10 2023675 - Changing CNV Namespace 2023779 - Fix Patch 104847 in 4.9 2023781 - initial hardware devices is not loading in wizard 2023832 - CCO updates lastTransitionTime for non-Status changes 2023839 - Bump recommended FCOS to 34.20211031.3.0 2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly 2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository 2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8 2024055 - External DNS added extra prefix for the TXT record 2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully 2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json 2024199 - 400 Bad Request error for some queries for the non admin user 2024220 - Cluster monitoring checkbox flickers when 
installing Operator in all-namespace mode 2024262 - Sample catalog is not displayed when one API call to the backend fails 2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability 2024316 - modal about support displays wrong annotation 2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected 2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page 2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view 2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined 2024515 - test-blocker: Ceph-storage-plugin tests failing 2024535 - hotplug disk missing OwnerReference 2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image 2024547 - Detail page is breaking for namespace store , backing store and bucket class. 2024551 - KMS resources not getting created for IBM FlashSystem storage 2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel 2024613 - pod-identity-webhook starts without tls 2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded 2024665 - Bindable services are not shown on topology 2024731 - linuxptp container: unnecessary checking of interfaces 2024750 - i18n some remaining OLM items 2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured 2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack 2024841 - test Keycloak with latest tag 2024859 - Not able to deploy an existing image from private image registry using developer console 2024880 - Egress IP breaks when network policies are applied 2024900 - Operator upgrade kube-apiserver 2024932 - console throws "Unauthorized" error after logging out 2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up 2025093 - Installer does not honour diskformat 
specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user Create VM missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn't supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly
found multiple load balancers 2042029 - kubedescheduler fails to install completely 2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters 2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs 2042059 - update discovery burst to reflect lots of CRDs on openshift clusters 2042069 - Revert toolbox to rhcos-toolbox 2042169 - Can not delete egressnetworkpolicy in Foreground propagation 2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool 2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud 2042274 - Storage API should be used when creating a PVC 2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection 2042366 - Lifecycle hooks should be independently managed 2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway 2042382 - [e2e][automation] CI takes more then 2 hours to run 2042395 - Add prerequisites for active health checks test 2042438 - Missing rpms in openstack-installer image 2042466 - Selection does not happen when switching from Topology Graph to List View 2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver 2042567 - insufficient info on CodeReady Containers configuration 2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk 2042619 - Overview page of the console is broken for hypershift clusters 2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running 2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud 2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud 2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly 2042829 - Topology performance: HPA was 
fetched for each Deployment (Pod Ring) 2042851 - Create template from SAP HANA template flow - VM is created instead of a new template 2042906 - Edit machineset with same machine deletion hook name succeed 2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal" 2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways' 2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial] 2043043 - Cluster Autoscaler should use K8s 1.23 dependencies 2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props) 2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects". 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 2044201 - Templates golden image parameters names should be supported 2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8] 2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied" 2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects 2044347 - Bump to kubernetes 1.23.3 2044481 - collect sharedresource cluster scoped instances with must-gather 2044496 - Unable to create hardware events subscription - failed to add finalizers 2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources 2044680 - Additional libovsdb performance and resource consumption fixes 2044704 - Observe > Alerting pages should not show runbook links in 4.10 2044717 - [e2e] improve tests for upstream test environment 2044724 - Remove namespace column on VM list page when a project 
is selected 2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff 2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD 2045024 - CustomNoUpgrade alerts should be ignored 2045112 - vsphere-problem-detector has missing rbac rules for leases 2045199 - SnapShot with Disk Hot-plug hangs 2045561 - Cluster Autoscaler should use the same default Group value as Cluster API 2045591 - Reconciliation of aws pod identity mutating webhook did not happen 2045849 - Add Sprint 212 translations 2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10 2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin 2045916 - [IBMCloud] Default machine profile in installer is unreliable 2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment 2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify 2046137 - oc output for unknown commands is not human readable 2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance 2046297 - Bump DB reconnect timeout 2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations 2046597 - Observe > Targets page may show the wrong service monitor is multiple monitors have the same namespace & label selectors 2046626 - Allow setting custom metrics for Ansible-based Operators 2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud 2047025 - Installation fails because of Alibaba CSI driver operator is degraded 2047190 - Bump Alibaba CSI driver for 4.10 2047238 - When using communities and localpreferences together, only localpreference gets applied 2047255 - alibaba: resourceGroupID not found 2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions 
2047317 - Update HELM OWNERS files under Dev Console 2047455 - [IBM Cloud] Update custom image os type 2047496 - Add image digest feature 2047779 - do not degrade cluster if storagepolicy creation fails 2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2047929 - use lease for leader election 2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services 2048048 - Application tab in User Preferences dropdown menus are too wide. 2048050 - Topology list view items are not highlighted on keyboard navigation 2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value 2048413 - Bond CNI: Failed to attach Bond NAD to pod 2048443 - Image registry operator panics when finalizes config deletion 2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-* 2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt 2048598 - Web terminal view is broken 2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure 2048891 - Topology page is crashed 2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class 2049043 - Cannot create VM from template 2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2049886 - Placeholder bug for OCP 4.10.0 metadata release 2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning 2050189 - [aws-efs-csi-driver] Merge upstream changes since 
v1.3.2 2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0 2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension' 2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s] 2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members 2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes 2050370 - alert data for burn budget needs to be updated to prevent regression 2050393 - ZTP missing support for local image registry and custom machine config 2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud 2050737 - Remove metrics and events for master port offsets 2050801 - Vsphere upi tries to access vsphere during manifests generation phase 2050883 - Logger object in LSO does not log source location accurately 2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit 2052062 - Whereabouts should implement client-go 1.22+ 2052125 - [4.10] Crio appears to be coredumping in some scenarios 2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config 2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1 2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch 2052756 - [4.10] PVs are not being cleaned up after PVC deletion 2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index 2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format" 2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs 2053268 - inability to detect static lifecycle failure 2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates 2053323 - OpenShift-Ansible BYOH Unit Tests are Broken 2053339 - Remove dev preview badge from IBM FlashSystem deployment windows 2053751 - ztp-site-generate container is missing convenience entrypoint 2053945 - [4.10] Failed to apply sriov policy on intel nics 2054109 - Missing "app" label 2054154 - RoleBinding in project without subject is causing "Project access" page to fail 2054244 - Latest pipeline run should be listed on the top of the pipeline run list 2054288 - console-master-e2e-gcp-console is broken 2054562 - DPU network operator 4.10 branch need to sync with master 2054897 - Unable to deploy hw-event-proxy operator 2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently 2055358 - Summary Interval Hardcoded in PTP Operator 
if Set in the Global Body Instead of Command Line 2055371 - Remove Check which enforces summary_interval must match logSyncInterval 2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11 2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API 2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured 2056479 - ovirt-csi-driver-node pods are crashing intermittently 2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope" 2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes" 2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs 2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation 2056948 - post 1.23 rebase: regression in service-load balancer reliability 2057438 - Service Level Agreement (SLA) always show 'Unknown' 2057721 - Fix Proxy support in RHACM 2.4.2 2057724 - Image creation fails when NMstateConfig CR is empty 2058641 - [4.10] Pod density test causing problems when using kube-burner 2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install 2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials 2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc

  1. References:

https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

APPLE-SA-2020-07-15-3 tvOS 13.4.8

tvOS 13.4.8 is now available and addresses the following:

Audio Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2020-9889: JunDong Xie and XingWei Li of Ant-financial Light-Year Security Lab

Audio Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2020-9888: JunDong Xie and XingWei Li of Ant-financial Light-Year Security Lab CVE-2020-9890: JunDong Xie and XingWei Li of Ant-financial Light-Year Security Lab CVE-2020-9891: JunDong Xie and XingWei Li of Ant-financial Light-Year Security Lab

AVEVideoEncoder Available for: Apple TV 4K and Apple TV HD Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed by removing the vulnerable code. CVE-2020-9907: an anonymous researcher

Crash Reporter Available for: Apple TV 4K and Apple TV HD Impact: A malicious application may be able to break out of its sandbox Description: A memory corruption issue was addressed by removing the vulnerable code. CVE-2020-9865: Zhuo Liang of Qihoo 360 Vulcan Team working with 360 BugCloud

GeoServices Available for: Apple TV 4K and Apple TV HD Impact: A malicious application may be able to read sensitive location information Description: An authorization issue was addressed with improved state management. CVE-2020-9933: Min (Spark) Zheng and Xiaolong Bai of Alibaba Inc.

iAP Available for: Apple TV 4K and Apple TV HD Impact: An attacker in a privileged network position may be able to execute arbitrary code Description: An input validation issue existed in Bluetooth. CVE-2020-9914: Andy Davis of NCC Group

ImageIO Available for: Apple TV 4K and Apple TV HD Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2020-9936: Mickey Jin of Trend Micro

Kernel Available for: Apple TV 4K and Apple TV HD Impact: An attacker in a privileged network position may be able to inject into active connections within a VPN tunnel Description: A routing issue was addressed with improved restrictions. CVE-2019-14899: William J. Tolley, Beau Kujath, and Jedidiah R. Crandall

Kernel Available for: Apple TV 4K and Apple TV HD Impact: An attacker that has already achieved kernel code execution may be able to bypass kernel memory mitigations Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2020-9909: Brandon Azad of Google Project Zero

WebKit Available for: Apple TV 4K and Apple TV HD Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2020-9894: 0011 working with Trend Micro Zero Day Initiative

WebKit Available for: Apple TV 4K and Apple TV HD Impact: Processing maliciously crafted web content may prevent Content Security Policy from being enforced Description: An access issue existed in Content Security Policy. CVE-2020-9915: an anonymous researcher

WebKit Available for: Apple TV 4K and Apple TV HD Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A logic issue was addressed with improved state management. CVE-2020-9893: 0011 working with Trend Micro Zero Day Initiative CVE-2020-9895: Wen Xu of SSLab, Georgia Tech

WebKit Available for: Apple TV 4K and Apple TV HD Impact: A malicious attacker with arbitrary read and write capability may be able to bypass Pointer Authentication Description: Multiple issues were addressed with improved logic. CVE-2020-9910: Samuel Groß of Google Project Zero

WebKit Page Loading Available for: Apple TV 4K and Apple TV HD Impact: A malicious attacker may be able to conceal the destination of a URL Description: A URL Unicode encoding issue was addressed with improved state management. CVE-2020-9916: Rakesh Mane (@RakeshMane10)

WebKit Web Inspector Available for: Apple TV 4K and Apple TV HD Impact: Copying a URL from Web Inspector may lead to command injection Description: A command injection issue existed in Web Inspector. CVE-2020-9862: Ophir Lojkine (@lovasoa)

Wi-Fi Available for: Apple TV 4K and Apple TV HD Impact: A remote attacker may be able to cause unexpected system termination or corrupt kernel memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2020-9918: Jianjun Dai of 360 Alpha Lab working with 360 BugCloud (bugcloud.360.cn)

Additional recognition

Kernel We would like to acknowledge Brandon Azad of Google Project Zero for their assistance.

Installation note:

Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software."

To check the current version of software, select "Settings -> General -> About."

Relevant releases/architectures:

Red Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, s390x, x86_64

  1. Description:

GNOME is the default desktop environment of Red Hat Enterprise Linux.

The following packages have been upgraded to a later upstream version: gnome-remote-desktop (0.1.8), pipewire (0.3.6), vte291 (0.52.4), webkit2gtk3 (2.28.4), xdg-desktop-portal (1.6.0), xdg-desktop-portal-gtk (1.6.0).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.3 Release Notes linked from the References section.

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

GDM must be restarted for this update to take effect.

Bugs fixed (https://bugzilla.redhat.com/):

1207179 - Select items matching non existing pattern does not unselect already selected
1566027 - can't correctly compute contents size if hidden files are included
1569868 - Browsing samba shares using gvfs is very slow
1652178 - [RFE] perf-tool run on wayland
1656262 - The terminal's character display is unclear on rhel8 guest after installing gnome
1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled
1692536 - login screen shows after gnome-initial-setup
1706008 - Sound Effect sometimes fails to change to selected option.
1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead.
1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined
1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly
1758891 - tracker-devel subpackage missing from el8 repos
1775345 - Rebase xdg-desktop-portal to 1.6
1778579 - Nautilus does not respect umask settings.
1779691 - Rebase xdg-desktop-portal-gtk to 1.6
1794045 - There are two different high contrast versions of desktop icons
1804719 - Update vte291 to 0.52.4
1805929 - RHEL 8.1 gnome-shell-extension errors
1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp
1814820 - No checkbox to install updates in the shutdown dialog
1816070 - "search for an application to open this file" dialog broken
1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution
1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1817143 - Rebase WebKitGTK to 2.28
1820759 - Include IO stall fixes
1820760 - Include IO fixes
1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening
1827030 - gnome-settings-daemon: subscription notification on CentOS Stream
1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content
1832347 - [Rebase] Rebase pipewire to 0.3.x
1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install
1837381 - Backport screen cast improvements to 8.3
1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version
1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6
1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113
1840080 - Can not control top bar menus via keys in Wayland
1840788 - [flatpak][rhel8] unable to build potrace as dependency
1843486 - Software crash after clicking Updates tab
1844578 - anaconda very rarely crashes at startup with a pygobject traceback
1846191 - usb adapters hotplug crashes gnome-shell
1847051 - JS ERROR: TypeError: area is null
1847061 - File search doesn't work under certain locales
1847062 - gnome-remote-desktop crash on QXL graphics
1847203 - gnome-shell: get_top_visible_window_actor(): gnome-shell killed by SIGSEGV
1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow
1854734 - PipeWire 0.2 should be required by xdg-desktop-portal
1866332 - Remove obsolete libusb-devel dependency
1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at "Started GNOME Display Manager" - GDM regression issue.
1872270 - WebKit renderer hangs on Cockpit
1873093 - CVE-2020-14391 gnome-settings-daemon: Red Hat Customer Portal password logged and passed as command line argument when user registers through GNOME control center
1873963 - Failed to start session: org.gnome.Mutter.ScreenCast API version 2 lower than minimum supported version 3
1876462 - CVE-2020-3885 webkitgtk: Incorrect processing of file URLs
1876463 - CVE-2020-3894 webkitgtk: Race condition allows reading of restricted memory
1876465 - CVE-2020-3895 webkitgtk: Memory corruption triggered by a malicious web content
1876468 - CVE-2020-3897 webkitgtk: Type confusion leading to arbitrary code execution
1876470 - CVE-2020-3899 webkitgtk: Memory consumption issue leading to arbitrary code execution
1876472 - CVE-2020-3900 webkitgtk: Memory corruption triggered by a malicious web content
1876473 - CVE-2020-3901 webkitgtk: Type confusion leading to arbitrary code execution
1876476 - CVE-2020-3902 webkitgtk: Input validation issue leading to cross-site script attack
1876516 - CVE-2020-3862 webkitgtk: Denial of service via incorrect memory handling
1876518 - CVE-2020-3864 webkitgtk: Non-unique security origin for DOM object contexts
1876521 - CVE-2020-3865 webkitgtk: Incorrect security check for a top-level DOM object context
1876522 - CVE-2020-3867 webkitgtk: Incorrect state management leading to universal cross-site scripting
1876523 - CVE-2020-3868 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876536 - CVE-2019-8710 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876537 - CVE-2019-8743 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876540 - CVE-2019-8764 webkitgtk: Incorrect state management leading to universal cross-site scripting
1876543 - CVE-2019-8766 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876545 - CVE-2019-8782 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876548 - CVE-2019-8783 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876549 - CVE-2019-8808 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876550 - CVE-2019-8811 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876552 - CVE-2019-8812 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876553 - CVE-2019-8813 webkitgtk: Incorrect state management leading to universal cross-site scripting
1876554 - CVE-2019-8814 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876555 - CVE-2019-8815 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876556 - CVE-2019-8816 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876590 - CVE-2019-8819 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876591 - CVE-2019-8820 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876594 - CVE-2019-8823 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876607 - CVE-2019-8625 webkitgtk: Incorrect state management leading to universal cross-site scripting
1876611 - CVE-2019-8720 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution
1876617 - CVE-2019-8769 webkitgtk: Websites could reveal browsing history
1876619 - CVE-2019-8771 webkitgtk: Violation of iframe sandboxing policy
1877853 - File descriptors are being left behind on logout of RHEL 8 session
1879532 - CVE-2020-9862 webkitgtk: Command injection in web inspector
1879535 - CVE-2020-9893 webkitgtk: Use-after-free may lead to application termination or arbitrary code execution
1879536 - CVE-2020-9894 webkitgtk: Out-of-bounds read may lead to unexpected application termination or arbitrary code execution
1879538 - CVE-2020-9895 webkitgtk: Use-after-free may lead to application termination or arbitrary code execution
1879540 - CVE-2020-9915 webkitgtk: Access issue in content security policy
1879541 - CVE-2020-9925 webkitgtk: A logic issue may lead to cross site scripting
1879545 - CVE-2020-9802 webkitgtk: Logic issue may lead to arbitrary code execution
1879557 - CVE-2020-9803 webkitgtk: Memory corruption may lead to arbitrary code execution
1879559 - CVE-2020-9805 webkitgtk: Logic issue may lead to cross site scripting
1879563 - CVE-2020-9806 webkitgtk: Memory corruption may lead to arbitrary code execution
1879564 - CVE-2020-9807 webkitgtk: Memory corruption may lead to arbitrary code execution
1879566 - CVE-2020-9843 webkitgtk: Input validation issue may lead to cross site scripting
1879568 - CVE-2020-9850 webkitgtk: Logic issue may lead to arbitrary code execution
1880339 - Right GLX stereo texture is potentially leaked for each closed window

  2. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: LibRaw-0.19.5-2.el8.src.rpm PackageKit-1.1.12-6.el8.src.rpm dleyna-renderer-0.6.0-3.el8.src.rpm frei0r-plugins-1.6.1-7.el8.src.rpm gdm-3.28.3-34.el8.src.rpm gnome-control-center-3.28.2-22.el8.src.rpm gnome-photos-3.28.1-3.el8.src.rpm gnome-remote-desktop-0.1.8-3.el8.src.rpm gnome-session-3.28.1-10.el8.src.rpm gnome-settings-daemon-3.32.0-11.el8.src.rpm gnome-shell-3.32.2-20.el8.src.rpm gnome-shell-extensions-3.32.1-11.el8.src.rpm gnome-terminal-3.28.3-2.el8.src.rpm gtk3-3.22.30-6.el8.src.rpm gvfs-1.36.2-10.el8.src.rpm mutter-3.32.2-48.el8.src.rpm nautilus-3.28.1-14.el8.src.rpm pipewire-0.3.6-1.el8.src.rpm pipewire0.2-0.2.7-6.el8.src.rpm potrace-1.15-3.el8.src.rpm tracker-2.1.5-2.el8.src.rpm vte291-0.52.4-2.el8.src.rpm webkit2gtk3-2.28.4-1.el8.src.rpm webrtc-audio-processing-0.3-9.el8.src.rpm xdg-desktop-portal-1.6.0-2.el8.src.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.src.rpm

aarch64: PackageKit-1.1.12-6.el8.aarch64.rpm PackageKit-command-not-found-1.1.12-6.el8.aarch64.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-cron-1.1.12-6.el8.aarch64.rpm PackageKit-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-debugsource-1.1.12-6.el8.aarch64.rpm PackageKit-glib-1.1.12-6.el8.aarch64.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.aarch64.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.aarch64.rpm PackageKit-gtk3-module-1.1.12-6.el8.aarch64.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.aarch64.rpm frei0r-plugins-1.6.1-7.el8.aarch64.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.aarch64.rpm frei0r-plugins-debugsource-1.6.1-7.el8.aarch64.rpm frei0r-plugins-opencv-1.6.1-7.el8.aarch64.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.aarch64.rpm gdm-3.28.3-34.el8.aarch64.rpm gdm-debuginfo-3.28.3-34.el8.aarch64.rpm gdm-debugsource-3.28.3-34.el8.aarch64.rpm gnome-control-center-3.28.2-22.el8.aarch64.rpm gnome-control-center-debuginfo-3.28.2-22.el8.aarch64.rpm gnome-control-center-debugsource-3.28.2-22.el8.aarch64.rpm gnome-remote-desktop-0.1.8-3.el8.aarch64.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.aarch64.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.aarch64.rpm gnome-session-3.28.1-10.el8.aarch64.rpm gnome-session-debuginfo-3.28.1-10.el8.aarch64.rpm gnome-session-debugsource-3.28.1-10.el8.aarch64.rpm gnome-session-wayland-session-3.28.1-10.el8.aarch64.rpm gnome-session-xsession-3.28.1-10.el8.aarch64.rpm gnome-settings-daemon-3.32.0-11.el8.aarch64.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.aarch64.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.aarch64.rpm gnome-shell-3.32.2-20.el8.aarch64.rpm gnome-shell-debuginfo-3.32.2-20.el8.aarch64.rpm gnome-shell-debugsource-3.32.2-20.el8.aarch64.rpm gnome-terminal-3.28.3-2.el8.aarch64.rpm gnome-terminal-debuginfo-3.28.3-2.el8.aarch64.rpm gnome-terminal-debugsource-3.28.3-2.el8.aarch64.rpm 
gnome-terminal-nautilus-3.28.3-2.el8.aarch64.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.aarch64.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.aarch64.rpm gtk-update-icon-cache-3.22.30-6.el8.aarch64.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-3.22.30-6.el8.aarch64.rpm gtk3-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-debugsource-3.22.30-6.el8.aarch64.rpm gtk3-devel-3.22.30-6.el8.aarch64.rpm gtk3-devel-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-immodule-xim-3.22.30-6.el8.aarch64.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.aarch64.rpm gtk3-tests-debuginfo-3.22.30-6.el8.aarch64.rpm gvfs-1.36.2-10.el8.aarch64.rpm gvfs-afc-1.36.2-10.el8.aarch64.rpm gvfs-afc-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-afp-1.36.2-10.el8.aarch64.rpm gvfs-afp-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-archive-1.36.2-10.el8.aarch64.rpm gvfs-archive-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-client-1.36.2-10.el8.aarch64.rpm gvfs-client-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-debugsource-1.36.2-10.el8.aarch64.rpm gvfs-devel-1.36.2-10.el8.aarch64.rpm gvfs-fuse-1.36.2-10.el8.aarch64.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-goa-1.36.2-10.el8.aarch64.rpm gvfs-goa-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-gphoto2-1.36.2-10.el8.aarch64.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-mtp-1.36.2-10.el8.aarch64.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.aarch64.rpm gvfs-smb-1.36.2-10.el8.aarch64.rpm gvfs-smb-debuginfo-1.36.2-10.el8.aarch64.rpm libsoup-debuginfo-2.62.3-2.el8.aarch64.rpm libsoup-debugsource-2.62.3-2.el8.aarch64.rpm libsoup-devel-2.62.3-2.el8.aarch64.rpm mutter-3.32.2-48.el8.aarch64.rpm mutter-debuginfo-3.32.2-48.el8.aarch64.rpm mutter-debugsource-3.32.2-48.el8.aarch64.rpm mutter-tests-debuginfo-3.32.2-48.el8.aarch64.rpm nautilus-3.28.1-14.el8.aarch64.rpm nautilus-debuginfo-3.28.1-14.el8.aarch64.rpm nautilus-debugsource-3.28.1-14.el8.aarch64.rpm 
nautilus-extensions-3.28.1-14.el8.aarch64.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.aarch64.rpm pipewire-0.3.6-1.el8.aarch64.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-debugsource-0.3.6-1.el8.aarch64.rpm pipewire-devel-0.3.6-1.el8.aarch64.rpm pipewire-doc-0.3.6-1.el8.aarch64.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-libs-0.3.6-1.el8.aarch64.rpm pipewire-libs-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire-utils-0.3.6-1.el8.aarch64.rpm pipewire-utils-debuginfo-0.3.6-1.el8.aarch64.rpm pipewire0.2-debugsource-0.2.7-6.el8.aarch64.rpm pipewire0.2-devel-0.2.7-6.el8.aarch64.rpm pipewire0.2-libs-0.2.7-6.el8.aarch64.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.aarch64.rpm potrace-1.15-3.el8.aarch64.rpm potrace-debuginfo-1.15-3.el8.aarch64.rpm potrace-debugsource-1.15-3.el8.aarch64.rpm pygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm pygobject3-debugsource-3.28.3-2.el8.aarch64.rpm python3-gobject-3.28.3-2.el8.aarch64.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm python3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm tracker-2.1.5-2.el8.aarch64.rpm tracker-debuginfo-2.1.5-2.el8.aarch64.rpm tracker-debugsource-2.1.5-2.el8.aarch64.rpm vte-profile-0.52.4-2.el8.aarch64.rpm vte291-0.52.4-2.el8.aarch64.rpm vte291-debuginfo-0.52.4-2.el8.aarch64.rpm vte291-debugsource-0.52.4-2.el8.aarch64.rpm vte291-devel-debuginfo-0.52.4-2.el8.aarch64.rpm webkit2gtk3-2.28.4-1.el8.aarch64.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.aarch64.rpm webkit2gtk3-debugsource-2.28.4-1.el8.aarch64.rpm webkit2gtk3-devel-2.28.4-1.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.aarch64.rpm webrtc-audio-processing-0.3-9.el8.aarch64.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.aarch64.rpm 
webrtc-audio-processing-debugsource-0.3-9.el8.aarch64.rpm xdg-desktop-portal-1.6.0-2.el8.aarch64.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.aarch64.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.aarch64.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.aarch64.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.aarch64.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.aarch64.rpm

noarch: gnome-classic-session-3.32.1-11.el8.noarch.rpm gnome-control-center-filesystem-3.28.2-22.el8.noarch.rpm gnome-shell-extension-apps-menu-3.32.1-11.el8.noarch.rpm gnome-shell-extension-auto-move-windows-3.32.1-11.el8.noarch.rpm gnome-shell-extension-common-3.32.1-11.el8.noarch.rpm gnome-shell-extension-dash-to-dock-3.32.1-11.el8.noarch.rpm gnome-shell-extension-desktop-icons-3.32.1-11.el8.noarch.rpm gnome-shell-extension-disable-screenshield-3.32.1-11.el8.noarch.rpm gnome-shell-extension-drive-menu-3.32.1-11.el8.noarch.rpm gnome-shell-extension-horizontal-workspaces-3.32.1-11.el8.noarch.rpm gnome-shell-extension-launch-new-instance-3.32.1-11.el8.noarch.rpm gnome-shell-extension-native-window-placement-3.32.1-11.el8.noarch.rpm gnome-shell-extension-no-hot-corner-3.32.1-11.el8.noarch.rpm gnome-shell-extension-panel-favorites-3.32.1-11.el8.noarch.rpm gnome-shell-extension-places-menu-3.32.1-11.el8.noarch.rpm gnome-shell-extension-screenshot-window-sizer-3.32.1-11.el8.noarch.rpm gnome-shell-extension-systemMonitor-3.32.1-11.el8.noarch.rpm gnome-shell-extension-top-icons-3.32.1-11.el8.noarch.rpm gnome-shell-extension-updates-dialog-3.32.1-11.el8.noarch.rpm gnome-shell-extension-user-theme-3.32.1-11.el8.noarch.rpm gnome-shell-extension-window-grouper-3.32.1-11.el8.noarch.rpm gnome-shell-extension-window-list-3.32.1-11.el8.noarch.rpm gnome-shell-extension-windowsNavigator-3.32.1-11.el8.noarch.rpm gnome-shell-extension-workspace-indicator-3.32.1-11.el8.noarch.rpm

ppc64le: LibRaw-0.19.5-2.el8.ppc64le.rpm LibRaw-debuginfo-0.19.5-2.el8.ppc64le.rpm LibRaw-debugsource-0.19.5-2.el8.ppc64le.rpm LibRaw-samples-debuginfo-0.19.5-2.el8.ppc64le.rpm PackageKit-1.1.12-6.el8.ppc64le.rpm PackageKit-command-not-found-1.1.12-6.el8.ppc64le.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-cron-1.1.12-6.el8.ppc64le.rpm PackageKit-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-debugsource-1.1.12-6.el8.ppc64le.rpm PackageKit-glib-1.1.12-6.el8.ppc64le.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.ppc64le.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.ppc64le.rpm PackageKit-gtk3-module-1.1.12-6.el8.ppc64le.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.ppc64le.rpm dleyna-renderer-0.6.0-3.el8.ppc64le.rpm dleyna-renderer-debuginfo-0.6.0-3.el8.ppc64le.rpm dleyna-renderer-debugsource-0.6.0-3.el8.ppc64le.rpm frei0r-plugins-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-debugsource-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-opencv-1.6.1-7.el8.ppc64le.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.ppc64le.rpm gdm-3.28.3-34.el8.ppc64le.rpm gdm-debuginfo-3.28.3-34.el8.ppc64le.rpm gdm-debugsource-3.28.3-34.el8.ppc64le.rpm gnome-control-center-3.28.2-22.el8.ppc64le.rpm gnome-control-center-debuginfo-3.28.2-22.el8.ppc64le.rpm gnome-control-center-debugsource-3.28.2-22.el8.ppc64le.rpm gnome-photos-3.28.1-3.el8.ppc64le.rpm gnome-photos-debuginfo-3.28.1-3.el8.ppc64le.rpm gnome-photos-debugsource-3.28.1-3.el8.ppc64le.rpm gnome-photos-tests-3.28.1-3.el8.ppc64le.rpm gnome-remote-desktop-0.1.8-3.el8.ppc64le.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.ppc64le.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.ppc64le.rpm gnome-session-3.28.1-10.el8.ppc64le.rpm gnome-session-debuginfo-3.28.1-10.el8.ppc64le.rpm gnome-session-debugsource-3.28.1-10.el8.ppc64le.rpm gnome-session-wayland-session-3.28.1-10.el8.ppc64le.rpm 
gnome-session-xsession-3.28.1-10.el8.ppc64le.rpm gnome-settings-daemon-3.32.0-11.el8.ppc64le.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.ppc64le.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.ppc64le.rpm gnome-shell-3.32.2-20.el8.ppc64le.rpm gnome-shell-debuginfo-3.32.2-20.el8.ppc64le.rpm gnome-shell-debugsource-3.32.2-20.el8.ppc64le.rpm gnome-terminal-3.28.3-2.el8.ppc64le.rpm gnome-terminal-debuginfo-3.28.3-2.el8.ppc64le.rpm gnome-terminal-debugsource-3.28.3-2.el8.ppc64le.rpm gnome-terminal-nautilus-3.28.3-2.el8.ppc64le.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.ppc64le.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.ppc64le.rpm gtk-update-icon-cache-3.22.30-6.el8.ppc64le.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-3.22.30-6.el8.ppc64le.rpm gtk3-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-debugsource-3.22.30-6.el8.ppc64le.rpm gtk3-devel-3.22.30-6.el8.ppc64le.rpm gtk3-devel-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-immodule-xim-3.22.30-6.el8.ppc64le.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.ppc64le.rpm gtk3-tests-debuginfo-3.22.30-6.el8.ppc64le.rpm gvfs-1.36.2-10.el8.ppc64le.rpm gvfs-afc-1.36.2-10.el8.ppc64le.rpm gvfs-afc-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-afp-1.36.2-10.el8.ppc64le.rpm gvfs-afp-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-archive-1.36.2-10.el8.ppc64le.rpm gvfs-archive-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-client-1.36.2-10.el8.ppc64le.rpm gvfs-client-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-debugsource-1.36.2-10.el8.ppc64le.rpm gvfs-devel-1.36.2-10.el8.ppc64le.rpm gvfs-fuse-1.36.2-10.el8.ppc64le.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-goa-1.36.2-10.el8.ppc64le.rpm gvfs-goa-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-gphoto2-1.36.2-10.el8.ppc64le.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.ppc64le.rpm gvfs-mtp-1.36.2-10.el8.ppc64le.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.ppc64le.rpm 
gvfs-smb-1.36.2-10.el8.ppc64le.rpm gvfs-smb-debuginfo-1.36.2-10.el8.ppc64le.rpm libsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm libsoup-debugsource-2.62.3-2.el8.ppc64le.rpm libsoup-devel-2.62.3-2.el8.ppc64le.rpm mutter-3.32.2-48.el8.ppc64le.rpm mutter-debuginfo-3.32.2-48.el8.ppc64le.rpm mutter-debugsource-3.32.2-48.el8.ppc64le.rpm mutter-tests-debuginfo-3.32.2-48.el8.ppc64le.rpm nautilus-3.28.1-14.el8.ppc64le.rpm nautilus-debuginfo-3.28.1-14.el8.ppc64le.rpm nautilus-debugsource-3.28.1-14.el8.ppc64le.rpm nautilus-extensions-3.28.1-14.el8.ppc64le.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.ppc64le.rpm pipewire-0.3.6-1.el8.ppc64le.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-debugsource-0.3.6-1.el8.ppc64le.rpm pipewire-devel-0.3.6-1.el8.ppc64le.rpm pipewire-doc-0.3.6-1.el8.ppc64le.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-libs-0.3.6-1.el8.ppc64le.rpm pipewire-libs-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire-utils-0.3.6-1.el8.ppc64le.rpm pipewire-utils-debuginfo-0.3.6-1.el8.ppc64le.rpm pipewire0.2-debugsource-0.2.7-6.el8.ppc64le.rpm pipewire0.2-devel-0.2.7-6.el8.ppc64le.rpm pipewire0.2-libs-0.2.7-6.el8.ppc64le.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.ppc64le.rpm potrace-1.15-3.el8.ppc64le.rpm potrace-debuginfo-1.15-3.el8.ppc64le.rpm potrace-debugsource-1.15-3.el8.ppc64le.rpm pygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm pygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm python3-gobject-3.28.3-2.el8.ppc64le.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm python3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm tracker-2.1.5-2.el8.ppc64le.rpm tracker-debuginfo-2.1.5-2.el8.ppc64le.rpm tracker-debugsource-2.1.5-2.el8.ppc64le.rpm vte-profile-0.52.4-2.el8.ppc64le.rpm vte291-0.52.4-2.el8.ppc64le.rpm vte291-debuginfo-0.52.4-2.el8.ppc64le.rpm vte291-debugsource-0.52.4-2.el8.ppc64le.rpm vte291-devel-debuginfo-0.52.4-2.el8.ppc64le.rpm webkit2gtk3-2.28.4-1.el8.ppc64le.rpm 
webkit2gtk3-debuginfo-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-debugsource-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-devel-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm webrtc-audio-processing-0.3-9.el8.ppc64le.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.ppc64le.rpm webrtc-audio-processing-debugsource-0.3-9.el8.ppc64le.rpm xdg-desktop-portal-1.6.0-2.el8.ppc64le.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.ppc64le.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.ppc64le.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.ppc64le.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.ppc64le.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.ppc64le.rpm

s390x: PackageKit-1.1.12-6.el8.s390x.rpm PackageKit-command-not-found-1.1.12-6.el8.s390x.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-cron-1.1.12-6.el8.s390x.rpm PackageKit-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-debugsource-1.1.12-6.el8.s390x.rpm PackageKit-glib-1.1.12-6.el8.s390x.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.s390x.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.s390x.rpm PackageKit-gtk3-module-1.1.12-6.el8.s390x.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.s390x.rpm frei0r-plugins-1.6.1-7.el8.s390x.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.s390x.rpm frei0r-plugins-debugsource-1.6.1-7.el8.s390x.rpm frei0r-plugins-opencv-1.6.1-7.el8.s390x.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.s390x.rpm gdm-3.28.3-34.el8.s390x.rpm gdm-debuginfo-3.28.3-34.el8.s390x.rpm gdm-debugsource-3.28.3-34.el8.s390x.rpm gnome-control-center-3.28.2-22.el8.s390x.rpm gnome-control-center-debuginfo-3.28.2-22.el8.s390x.rpm gnome-control-center-debugsource-3.28.2-22.el8.s390x.rpm gnome-remote-desktop-0.1.8-3.el8.s390x.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.s390x.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.s390x.rpm gnome-session-3.28.1-10.el8.s390x.rpm gnome-session-debuginfo-3.28.1-10.el8.s390x.rpm gnome-session-debugsource-3.28.1-10.el8.s390x.rpm gnome-session-wayland-session-3.28.1-10.el8.s390x.rpm gnome-session-xsession-3.28.1-10.el8.s390x.rpm gnome-settings-daemon-3.32.0-11.el8.s390x.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.s390x.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.s390x.rpm gnome-shell-3.32.2-20.el8.s390x.rpm gnome-shell-debuginfo-3.32.2-20.el8.s390x.rpm gnome-shell-debugsource-3.32.2-20.el8.s390x.rpm gnome-terminal-3.28.3-2.el8.s390x.rpm gnome-terminal-debuginfo-3.28.3-2.el8.s390x.rpm gnome-terminal-debugsource-3.28.3-2.el8.s390x.rpm gnome-terminal-nautilus-3.28.3-2.el8.s390x.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.s390x.rpm 
gsettings-desktop-schemas-devel-3.32.0-5.el8.s390x.rpm gtk-update-icon-cache-3.22.30-6.el8.s390x.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-3.22.30-6.el8.s390x.rpm gtk3-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-debugsource-3.22.30-6.el8.s390x.rpm gtk3-devel-3.22.30-6.el8.s390x.rpm gtk3-devel-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-immodule-xim-3.22.30-6.el8.s390x.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.s390x.rpm gtk3-tests-debuginfo-3.22.30-6.el8.s390x.rpm gvfs-1.36.2-10.el8.s390x.rpm gvfs-afp-1.36.2-10.el8.s390x.rpm gvfs-afp-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-archive-1.36.2-10.el8.s390x.rpm gvfs-archive-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-client-1.36.2-10.el8.s390x.rpm gvfs-client-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-debugsource-1.36.2-10.el8.s390x.rpm gvfs-devel-1.36.2-10.el8.s390x.rpm gvfs-fuse-1.36.2-10.el8.s390x.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-goa-1.36.2-10.el8.s390x.rpm gvfs-goa-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-gphoto2-1.36.2-10.el8.s390x.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-mtp-1.36.2-10.el8.s390x.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.s390x.rpm gvfs-smb-1.36.2-10.el8.s390x.rpm gvfs-smb-debuginfo-1.36.2-10.el8.s390x.rpm libsoup-debuginfo-2.62.3-2.el8.s390x.rpm libsoup-debugsource-2.62.3-2.el8.s390x.rpm libsoup-devel-2.62.3-2.el8.s390x.rpm mutter-3.32.2-48.el8.s390x.rpm mutter-debuginfo-3.32.2-48.el8.s390x.rpm mutter-debugsource-3.32.2-48.el8.s390x.rpm mutter-tests-debuginfo-3.32.2-48.el8.s390x.rpm nautilus-3.28.1-14.el8.s390x.rpm nautilus-debuginfo-3.28.1-14.el8.s390x.rpm nautilus-debugsource-3.28.1-14.el8.s390x.rpm nautilus-extensions-3.28.1-14.el8.s390x.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.s390x.rpm pipewire-0.3.6-1.el8.s390x.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-debugsource-0.3.6-1.el8.s390x.rpm 
pipewire-devel-0.3.6-1.el8.s390x.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-libs-0.3.6-1.el8.s390x.rpm pipewire-libs-debuginfo-0.3.6-1.el8.s390x.rpm pipewire-utils-0.3.6-1.el8.s390x.rpm pipewire-utils-debuginfo-0.3.6-1.el8.s390x.rpm pipewire0.2-debugsource-0.2.7-6.el8.s390x.rpm pipewire0.2-devel-0.2.7-6.el8.s390x.rpm pipewire0.2-libs-0.2.7-6.el8.s390x.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.s390x.rpm potrace-1.15-3.el8.s390x.rpm potrace-debuginfo-1.15-3.el8.s390x.rpm potrace-debugsource-1.15-3.el8.s390x.rpm pygobject3-debuginfo-3.28.3-2.el8.s390x.rpm pygobject3-debugsource-3.28.3-2.el8.s390x.rpm python3-gobject-3.28.3-2.el8.s390x.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.s390x.rpm python3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm tracker-2.1.5-2.el8.s390x.rpm tracker-debuginfo-2.1.5-2.el8.s390x.rpm tracker-debugsource-2.1.5-2.el8.s390x.rpm vte-profile-0.52.4-2.el8.s390x.rpm vte291-0.52.4-2.el8.s390x.rpm vte291-debuginfo-0.52.4-2.el8.s390x.rpm vte291-debugsource-0.52.4-2.el8.s390x.rpm vte291-devel-debuginfo-0.52.4-2.el8.s390x.rpm webkit2gtk3-2.28.4-1.el8.s390x.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.s390x.rpm webkit2gtk3-debugsource-2.28.4-1.el8.s390x.rpm webkit2gtk3-devel-2.28.4-1.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.s390x.rpm webrtc-audio-processing-0.3-9.el8.s390x.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.s390x.rpm webrtc-audio-processing-debugsource-0.3-9.el8.s390x.rpm xdg-desktop-portal-1.6.0-2.el8.s390x.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.s390x.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.s390x.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.s390x.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.s390x.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.s390x.rpm

x86_64: LibRaw-0.19.5-2.el8.i686.rpm LibRaw-0.19.5-2.el8.x86_64.rpm LibRaw-debuginfo-0.19.5-2.el8.i686.rpm LibRaw-debuginfo-0.19.5-2.el8.x86_64.rpm LibRaw-debugsource-0.19.5-2.el8.i686.rpm LibRaw-debugsource-0.19.5-2.el8.x86_64.rpm LibRaw-samples-debuginfo-0.19.5-2.el8.i686.rpm LibRaw-samples-debuginfo-0.19.5-2.el8.x86_64.rpm PackageKit-1.1.12-6.el8.x86_64.rpm PackageKit-command-not-found-1.1.12-6.el8.x86_64.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-command-not-found-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-cron-1.1.12-6.el8.x86_64.rpm PackageKit-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-debugsource-1.1.12-6.el8.i686.rpm PackageKit-debugsource-1.1.12-6.el8.x86_64.rpm PackageKit-glib-1.1.12-6.el8.i686.rpm PackageKit-glib-1.1.12-6.el8.x86_64.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-glib-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-gstreamer-plugin-1.1.12-6.el8.x86_64.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.x86_64.rpm PackageKit-gtk3-module-1.1.12-6.el8.i686.rpm PackageKit-gtk3-module-1.1.12-6.el8.x86_64.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.i686.rpm PackageKit-gtk3-module-debuginfo-1.1.12-6.el8.x86_64.rpm dleyna-renderer-0.6.0-3.el8.x86_64.rpm dleyna-renderer-debuginfo-0.6.0-3.el8.x86_64.rpm dleyna-renderer-debugsource-0.6.0-3.el8.x86_64.rpm frei0r-plugins-1.6.1-7.el8.x86_64.rpm frei0r-plugins-debuginfo-1.6.1-7.el8.x86_64.rpm frei0r-plugins-debugsource-1.6.1-7.el8.x86_64.rpm frei0r-plugins-opencv-1.6.1-7.el8.x86_64.rpm frei0r-plugins-opencv-debuginfo-1.6.1-7.el8.x86_64.rpm gdm-3.28.3-34.el8.i686.rpm gdm-3.28.3-34.el8.x86_64.rpm gdm-debuginfo-3.28.3-34.el8.i686.rpm gdm-debuginfo-3.28.3-34.el8.x86_64.rpm gdm-debugsource-3.28.3-34.el8.i686.rpm gdm-debugsource-3.28.3-34.el8.x86_64.rpm gnome-control-center-3.28.2-22.el8.x86_64.rpm 
gnome-control-center-debuginfo-3.28.2-22.el8.x86_64.rpm gnome-control-center-debugsource-3.28.2-22.el8.x86_64.rpm gnome-photos-3.28.1-3.el8.x86_64.rpm gnome-photos-debuginfo-3.28.1-3.el8.x86_64.rpm gnome-photos-debugsource-3.28.1-3.el8.x86_64.rpm gnome-photos-tests-3.28.1-3.el8.x86_64.rpm gnome-remote-desktop-0.1.8-3.el8.x86_64.rpm gnome-remote-desktop-debuginfo-0.1.8-3.el8.x86_64.rpm gnome-remote-desktop-debugsource-0.1.8-3.el8.x86_64.rpm gnome-session-3.28.1-10.el8.x86_64.rpm gnome-session-debuginfo-3.28.1-10.el8.x86_64.rpm gnome-session-debugsource-3.28.1-10.el8.x86_64.rpm gnome-session-wayland-session-3.28.1-10.el8.x86_64.rpm gnome-session-xsession-3.28.1-10.el8.x86_64.rpm gnome-settings-daemon-3.32.0-11.el8.x86_64.rpm gnome-settings-daemon-debuginfo-3.32.0-11.el8.x86_64.rpm gnome-settings-daemon-debugsource-3.32.0-11.el8.x86_64.rpm gnome-shell-3.32.2-20.el8.x86_64.rpm gnome-shell-debuginfo-3.32.2-20.el8.x86_64.rpm gnome-shell-debugsource-3.32.2-20.el8.x86_64.rpm gnome-terminal-3.28.3-2.el8.x86_64.rpm gnome-terminal-debuginfo-3.28.3-2.el8.x86_64.rpm gnome-terminal-debugsource-3.28.3-2.el8.x86_64.rpm gnome-terminal-nautilus-3.28.3-2.el8.x86_64.rpm gnome-terminal-nautilus-debuginfo-3.28.3-2.el8.x86_64.rpm gsettings-desktop-schemas-3.32.0-5.el8.i686.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.i686.rpm gsettings-desktop-schemas-devel-3.32.0-5.el8.x86_64.rpm gtk-update-icon-cache-3.22.30-6.el8.x86_64.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.i686.rpm gtk-update-icon-cache-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-3.22.30-6.el8.i686.rpm gtk3-3.22.30-6.el8.x86_64.rpm gtk3-debuginfo-3.22.30-6.el8.i686.rpm gtk3-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-debugsource-3.22.30-6.el8.i686.rpm gtk3-debugsource-3.22.30-6.el8.x86_64.rpm gtk3-devel-3.22.30-6.el8.i686.rpm gtk3-devel-3.22.30-6.el8.x86_64.rpm gtk3-devel-debuginfo-3.22.30-6.el8.i686.rpm gtk3-devel-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-immodule-xim-3.22.30-6.el8.x86_64.rpm 
gtk3-immodule-xim-debuginfo-3.22.30-6.el8.i686.rpm gtk3-immodule-xim-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.i686.rpm gtk3-immodules-debuginfo-3.22.30-6.el8.x86_64.rpm gtk3-tests-debuginfo-3.22.30-6.el8.i686.rpm gtk3-tests-debuginfo-3.22.30-6.el8.x86_64.rpm gvfs-1.36.2-10.el8.x86_64.rpm gvfs-afc-1.36.2-10.el8.x86_64.rpm gvfs-afc-debuginfo-1.36.2-10.el8.i686.rpm gvfs-afc-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-afp-1.36.2-10.el8.x86_64.rpm gvfs-afp-debuginfo-1.36.2-10.el8.i686.rpm gvfs-afp-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-archive-1.36.2-10.el8.x86_64.rpm gvfs-archive-debuginfo-1.36.2-10.el8.i686.rpm gvfs-archive-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-client-1.36.2-10.el8.i686.rpm gvfs-client-1.36.2-10.el8.x86_64.rpm gvfs-client-debuginfo-1.36.2-10.el8.i686.rpm gvfs-client-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-debuginfo-1.36.2-10.el8.i686.rpm gvfs-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-debugsource-1.36.2-10.el8.i686.rpm gvfs-debugsource-1.36.2-10.el8.x86_64.rpm gvfs-devel-1.36.2-10.el8.i686.rpm gvfs-devel-1.36.2-10.el8.x86_64.rpm gvfs-fuse-1.36.2-10.el8.x86_64.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.i686.rpm gvfs-fuse-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-goa-1.36.2-10.el8.x86_64.rpm gvfs-goa-debuginfo-1.36.2-10.el8.i686.rpm gvfs-goa-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-gphoto2-1.36.2-10.el8.x86_64.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.i686.rpm gvfs-gphoto2-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-mtp-1.36.2-10.el8.x86_64.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.i686.rpm gvfs-mtp-debuginfo-1.36.2-10.el8.x86_64.rpm gvfs-smb-1.36.2-10.el8.x86_64.rpm gvfs-smb-debuginfo-1.36.2-10.el8.i686.rpm gvfs-smb-debuginfo-1.36.2-10.el8.x86_64.rpm libsoup-debuginfo-2.62.3-2.el8.i686.rpm libsoup-debuginfo-2.62.3-2.el8.x86_64.rpm libsoup-debugsource-2.62.3-2.el8.i686.rpm libsoup-debugsource-2.62.3-2.el8.x86_64.rpm libsoup-devel-2.62.3-2.el8.i686.rpm libsoup-devel-2.62.3-2.el8.x86_64.rpm mutter-3.32.2-48.el8.i686.rpm 
mutter-3.32.2-48.el8.x86_64.rpm mutter-debuginfo-3.32.2-48.el8.i686.rpm mutter-debuginfo-3.32.2-48.el8.x86_64.rpm mutter-debugsource-3.32.2-48.el8.i686.rpm mutter-debugsource-3.32.2-48.el8.x86_64.rpm mutter-tests-debuginfo-3.32.2-48.el8.i686.rpm mutter-tests-debuginfo-3.32.2-48.el8.x86_64.rpm nautilus-3.28.1-14.el8.x86_64.rpm nautilus-debuginfo-3.28.1-14.el8.i686.rpm nautilus-debuginfo-3.28.1-14.el8.x86_64.rpm nautilus-debugsource-3.28.1-14.el8.i686.rpm nautilus-debugsource-3.28.1-14.el8.x86_64.rpm nautilus-extensions-3.28.1-14.el8.i686.rpm nautilus-extensions-3.28.1-14.el8.x86_64.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.i686.rpm nautilus-extensions-debuginfo-3.28.1-14.el8.x86_64.rpm pipewire-0.3.6-1.el8.i686.rpm pipewire-0.3.6-1.el8.x86_64.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.i686.rpm pipewire-alsa-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-debuginfo-0.3.6-1.el8.i686.rpm pipewire-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-debugsource-0.3.6-1.el8.i686.rpm pipewire-debugsource-0.3.6-1.el8.x86_64.rpm pipewire-devel-0.3.6-1.el8.i686.rpm pipewire-devel-0.3.6-1.el8.x86_64.rpm pipewire-doc-0.3.6-1.el8.x86_64.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.i686.rpm pipewire-gstreamer-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-libs-0.3.6-1.el8.i686.rpm pipewire-libs-0.3.6-1.el8.x86_64.rpm pipewire-libs-debuginfo-0.3.6-1.el8.i686.rpm pipewire-libs-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire-utils-0.3.6-1.el8.x86_64.rpm pipewire-utils-debuginfo-0.3.6-1.el8.i686.rpm pipewire-utils-debuginfo-0.3.6-1.el8.x86_64.rpm pipewire0.2-debugsource-0.2.7-6.el8.i686.rpm pipewire0.2-debugsource-0.2.7-6.el8.x86_64.rpm pipewire0.2-devel-0.2.7-6.el8.i686.rpm pipewire0.2-devel-0.2.7-6.el8.x86_64.rpm pipewire0.2-libs-0.2.7-6.el8.i686.rpm pipewire0.2-libs-0.2.7-6.el8.x86_64.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.i686.rpm pipewire0.2-libs-debuginfo-0.2.7-6.el8.x86_64.rpm potrace-1.15-3.el8.i686.rpm potrace-1.15-3.el8.x86_64.rpm potrace-debuginfo-1.15-3.el8.i686.rpm 
potrace-debuginfo-1.15-3.el8.x86_64.rpm potrace-debugsource-1.15-3.el8.i686.rpm potrace-debugsource-1.15-3.el8.x86_64.rpm pygobject3-debuginfo-3.28.3-2.el8.i686.rpm pygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm pygobject3-debugsource-3.28.3-2.el8.i686.rpm pygobject3-debugsource-3.28.3-2.el8.x86_64.rpm python3-gobject-3.28.3-2.el8.i686.rpm python3-gobject-3.28.3-2.el8.x86_64.rpm python3-gobject-base-3.28.3-2.el8.i686.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.i686.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm python3-gobject-debuginfo-3.28.3-2.el8.i686.rpm python3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm tracker-2.1.5-2.el8.i686.rpm tracker-2.1.5-2.el8.x86_64.rpm tracker-debuginfo-2.1.5-2.el8.i686.rpm tracker-debuginfo-2.1.5-2.el8.x86_64.rpm tracker-debugsource-2.1.5-2.el8.i686.rpm tracker-debugsource-2.1.5-2.el8.x86_64.rpm vte-profile-0.52.4-2.el8.x86_64.rpm vte291-0.52.4-2.el8.i686.rpm vte291-0.52.4-2.el8.x86_64.rpm vte291-debuginfo-0.52.4-2.el8.i686.rpm vte291-debuginfo-0.52.4-2.el8.x86_64.rpm vte291-debugsource-0.52.4-2.el8.i686.rpm vte291-debugsource-0.52.4-2.el8.x86_64.rpm vte291-devel-debuginfo-0.52.4-2.el8.i686.rpm vte291-devel-debuginfo-0.52.4-2.el8.x86_64.rpm webkit2gtk3-2.28.4-1.el8.i686.rpm webkit2gtk3-2.28.4-1.el8.x86_64.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.i686.rpm webkit2gtk3-debuginfo-2.28.4-1.el8.x86_64.rpm webkit2gtk3-debugsource-2.28.4-1.el8.i686.rpm webkit2gtk3-debugsource-2.28.4-1.el8.x86_64.rpm webkit2gtk3-devel-2.28.4-1.el8.i686.rpm webkit2gtk3-devel-2.28.4-1.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-2.28.4-1.el8.i686.rpm webkit2gtk3-jsc-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.i686.rpm webkit2gtk3-jsc-devel-2.28.4-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.i686.rpm 
webkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.x86_64.rpm webrtc-audio-processing-0.3-9.el8.i686.rpm webrtc-audio-processing-0.3-9.el8.x86_64.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.i686.rpm webrtc-audio-processing-debuginfo-0.3-9.el8.x86_64.rpm webrtc-audio-processing-debugsource-0.3-9.el8.i686.rpm webrtc-audio-processing-debugsource-0.3-9.el8.x86_64.rpm xdg-desktop-portal-1.6.0-2.el8.x86_64.rpm xdg-desktop-portal-debuginfo-1.6.0-2.el8.x86_64.rpm xdg-desktop-portal-debugsource-1.6.0-2.el8.x86_64.rpm xdg-desktop-portal-gtk-1.6.0-1.el8.x86_64.rpm xdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.x86_64.rpm xdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.x86_64.rpm

Red Hat Enterprise Linux BaseOS (v. 8):

Source: gsettings-desktop-schemas-3.32.0-5.el8.src.rpm libsoup-2.62.3-2.el8.src.rpm pygobject3-3.28.3-2.el8.src.rpm

aarch64: gsettings-desktop-schemas-3.32.0-5.el8.aarch64.rpm libsoup-2.62.3-2.el8.aarch64.rpm libsoup-debuginfo-2.62.3-2.el8.aarch64.rpm libsoup-debugsource-2.62.3-2.el8.aarch64.rpm pygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm pygobject3-debugsource-3.28.3-2.el8.aarch64.rpm python3-gobject-base-3.28.3-2.el8.aarch64.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm python3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm

ppc64le: gsettings-desktop-schemas-3.32.0-5.el8.ppc64le.rpm libsoup-2.62.3-2.el8.ppc64le.rpm libsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm libsoup-debugsource-2.62.3-2.el8.ppc64le.rpm pygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm pygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm python3-gobject-base-3.28.3-2.el8.ppc64le.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm python3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm

s390x: gsettings-desktop-schemas-3.32.0-5.el8.s390x.rpm libsoup-2.62.3-2.el8.s390x.rpm libsoup-debuginfo-2.62.3-2.el8.s390x.rpm libsoup-debugsource-2.62.3-2.el8.s390x.rpm pygobject3-debuginfo-3.28.3-2.el8.s390x.rpm pygobject3-debugsource-3.28.3-2.el8.s390x.rpm python3-gobject-base-3.28.3-2.el8.s390x.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.s390x.rpm python3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm

x86_64: gsettings-desktop-schemas-3.32.0-5.el8.x86_64.rpm libsoup-2.62.3-2.el8.i686.rpm libsoup-2.62.3-2.el8.x86_64.rpm libsoup-debuginfo-2.62.3-2.el8.i686.rpm libsoup-debuginfo-2.62.3-2.el8.x86_64.rpm libsoup-debugsource-2.62.3-2.el8.i686.rpm libsoup-debugsource-2.62.3-2.el8.x86_64.rpm pygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm pygobject3-debugsource-3.28.3-2.el8.x86_64.rpm python3-gobject-base-3.28.3-2.el8.x86_64.rpm python3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm python3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm

Red Hat CodeReady Linux Builder (v.

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202010-1296",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.8"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.20"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.4.8"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.0"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "6.2.8"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.3"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.6"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.1.2"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.6"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.1.2 \u672a\u6e80 (macos high sierra)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "7.20 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.6 \u672a\u6e80 (iphone 6s \u4ee5\u964d)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.6 \u672a\u6e80 (ipad mini 4 \u4ee5\u964d)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.1.2 \u672a\u6e80 (macos mojave)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "11.3 \u672a\u6e80 (microsoft store \u304b\u3089\u5165\u624b\u3057\u305f windows 10 \u4ee5\u964d)"
      },
      {
        "model": "watchos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "6.2.8 \u672a\u6e80 (apple watch series 1 \u4ee5\u964d)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.6 \u672a\u6e80 (ipod touch \u7b2c 7 \u4e16\u4ee3)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.4.8 \u672a\u6e80 (apple tv hd)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.6 \u672a\u6e80 (ipad air 2 \u4ee5\u964d)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.4.8 \u672a\u6e80 (apple tv 4k)"
      },
      {
        "model": "itunes",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 12.10.8 \u672a\u6e80 (windows 7 \u4ee5\u964d)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.1.2 \u672a\u6e80 (macos catalina)"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:watchos",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Gentoo",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "158688"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2020-9895",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 7.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-9895",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "HIGH",
            "trust": 1.1,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "Partial",
            "baseScore": 7.5,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-009903",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "High",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 7.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "VHN-188020",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "HIGH",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.8,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-9895",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 9.8,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-009903",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-9895",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-009903",
            "trust": 0.8,
            "value": "Critical"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202007-1146",
            "trust": 0.6,
            "value": "CRITICAL"
          },
          {
            "author": "VULHUB",
            "id": "VHN-188020",
            "trust": 0.1,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-9895",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A use after free issue was addressed with improved memory management. This issue is fixed in iOS 13.6 and iPadOS 13.6, tvOS 13.4.8, watchOS 6.2.8, Safari 13.1.2, iTunes 12.10.8 for Windows, iCloud for Windows 11.3, iCloud for Windows 7.20. A remote attacker may be able to cause unexpected application termination or arbitrary code execution. Apple Safari, etc. are all products of Apple (Apple). Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. WebKit is one of the web browser engine components. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple iOS prior to 13.6; iPadOS prior to 13.6; tvOS prior to 13.4.8; watchOS prior to 6.2.8; Safari prior to 13.1.2; Windows-based iTunes prior to 12.10.8. \n\nFor the stable distribution (buster), these problems have been fixed in\nversion 2.28.4-1~deb10u1. \n\nWe recommend that you upgrade your webkit2gtk packages. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202007-61\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: WebKitGTK+: Multiple vulnerabilities\n     Date: July 31, 2020\n     Bugs: #734584\n       ID: 202007-61\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich could result in the arbitrary execution of code. 
\n\nBackground\n=========\nWebKitGTK+ is a full-featured port of the WebKit rendering engine,\nsuitable for projects requiring any kind of web integration, from\nhybrid HTML/CSS applications to full-fledged web browsers. \n\nAffected packages\n================\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk          \u003c 2.28.4                  \u003e= 2.28.4\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll WebKitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.28.4\"\n\nReferences\n=========\n[ 1 ] CVE-2020-9862\n      https://nvd.nist.gov/vuln/detail/CVE-2020-9862\n[ 2 ] CVE-2020-9893\n      https://nvd.nist.gov/vuln/detail/CVE-2020-9893\n[ 3 ] CVE-2020-9894\n      https://nvd.nist.gov/vuln/detail/CVE-2020-9894\n[ 4 ] CVE-2020-9895\n      https://nvd.nist.gov/vuln/detail/CVE-2020-9895\n[ 5 ] CVE-2020-9915\n      https://nvd.nist.gov/vuln/detail/CVE-2020-9915\n[ 6 ] CVE-2020-9925\n      https://nvd.nist.gov/vuln/detail/CVE-2020-9925\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202007-61\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. 
\n\nLicense\n======\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. \n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

5. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from `oc explain`
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.10.3 security update
Advisory ID:       RHSA-2022:0056-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:0056
Issue date:        2022-03-10
CVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502
                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050
                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903
                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807
                   CVE-2019-25013 CVE-2020-1730 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902
                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952
                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434
                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503
                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618
                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362
                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326
                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541
                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305
                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947
                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666
                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928
                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938
                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226
                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716
                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673
                   CVE-2022-24407
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with
updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing
Kubernetes application platform solution designed for on-premise or private
cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container
Platform 4.10.3. See the following advisory for the RPM packages for this
release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory.

See the following Release Notes documentation, which will be updated
shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index
validation (CVE-2021-3121)
* grafana: Snapshot authentication bypass (CVE-2021-39226)
* golang: net/http: limit growth of header canonicalization cache
(CVE-2021-44716)
* nodejs-axios: Regular expression denial of service in trim function
(CVE-2021-3749)
* golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
* grafana: Forward OAuth Identity Token can allow users to access some data
sources (CVE-2022-21673)
* grafana: directory traversal vulnerability (CVE-2021-43813)

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata
as follows:

(For x86_64 architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is
sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is
sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info
quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is
sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c

All OpenShift Container Platform 4.10 users are advised to upgrade to these
updated packages and images when they are available in the appropriate
release channel. To check for available updates, use the OpenShift Console
or the CLI oc command.
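The image digests quoted above use the OCI `<algorithm>:<hex-encoded hash>` form. As an illustrative sketch only (the function name and the example payload are hypothetical, not part of `oc` or the advisory), a downloaded blob can be checked against such a digest string:

```python
import hashlib


def matches_digest(blob: bytes, digest: str) -> bool:
    """Return True if `blob` hashes to the given OCI-style digest.

    `digest` is in the "<algorithm>:<hex>" form used by release images,
    e.g. "sha256:7ffe4c...". Only sha256 is handled in this sketch.
    """
    algo, _, expected = digest.partition(":")
    if algo != "sha256":
        raise ValueError(f"unsupported digest algorithm: {algo}")
    # Compare the computed hex digest against the expected value.
    return hashlib.sha256(blob).hexdigest() == expected


# Example with a locally computed digest (not a real release image):
payload = b"example manifest"
ref = "sha256:" + hashlib.sha256(payload).hexdigest()
```

In practice the digest comparison is performed by the container tooling itself; the sketch only shows why a single flipped byte in the image content causes the pull to be rejected.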
Instructions for upgrading a cluster are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

3. Solution:

For OpenShift Container Platform 4.10 see the following documentation,
which will be updated shortly for this release, for important instructions
on how to upgrade your cluster and fully apply this asynchronous errata
update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "*/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state, alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch.* labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
\n2003252 - \"[sig-builds][Feature:Builds][Slow] starting a build using CLI  start-build test context override environment BUILD_LOGLEVEL in buildconfig\" tests do not work as expected outside of CI\n2003269 - Rejected pods should be filtered from admission regression\n2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release\n2003426 - [e2e][automation]  add test for vm details bootorder\n2003496 - [e2e][automation] add test for vm resources requirment settings\n2003641 - All metal ipi jobs are failing in 4.10\n2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state\n2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node\n2003683 - Samples operator is panicking in CI\n2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster \"Connection Details\" page\n2003715 - Error on creating local volume set after selection of the volume mode\n2003743 - Remove workaround keeping /boot RW for kdump support\n2003775 - etcd pod on CrashLoopBackOff after master replacement procedure\n2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver\n2003792 - Monitoring metrics query graph flyover panel is useless\n2003808 - Add Sprint 207 translations\n2003845 - Project admin cannot access image vulnerabilities view\n2003859 - sdn emits events with garbage messages\n2003896 - (release-4.10) ApiRequestCounts conditional gatherer\n2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas\n2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes\n2004059 - [e2e][automation] fix current tests for downstream\n2004060 - Trying to use basic spring boot sample causes crash on Firefox\n2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn\u0027t close after selection\n2004127 - [flake] openshift-controller-manager event 
reason/SuccessfulDelete occurs too frequently\n2004203 - build config\u0027s created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver\n2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory\n2004449 - Boot option recovery menu prevents image boot\n2004451 - The backup filename displayed in the RecentBackup message is incorrect\n2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts\n2004508 - TuneD issues with the recent ConfigParser changes. \n2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions\n2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs\n2004578 - Monitoring and node labels missing for an external storage platform\n2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days\n2004596 - [4.10] Bootimage bump tracker\n2004597 - Duplicate ramdisk log containers running\n2004600 - Duplicate ramdisk log containers running\n2004609 - output of \"crictl inspectp\" is not complete\n2004625 - BMC credentials could be logged if they change\n2004632 - When LE takes a large amount of time, multiple whereabouts are seen\n2004721 - ptp/worker custom threshold doesn\u0027t change ptp events threshold\n2004736 - [knative] Create button on new Broker form is inactive despite form being filled\n2004796 - [e2e][automation] add test for vm scheduling policy\n2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque\n2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card\n2004901 - [e2e][automation] improve kubevirt devconsole tests\n2004962 - Console frontend job consuming too much CPU in CI\n2005014 - state of ODF StorageSystem is misreported during installation or uninstallation\n2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines\n2005179 - pods status 
filter is not taking effect\n2005182 - sync list of deprecated apis about to be removed\n2005282 - Storage cluster name is given as title in StorageSystem details page\n2005355 - setuptools 58 makes Kuryr CI fail\n2005407 - ClusterNotUpgradeable Alert should be set to Severity Info\n2005415 - PTP operator with sidecar api configured throws  bind: address already in use\n2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console\n2005554 - The switch status of the button \"Show default project\" is not revealed correctly in code\n2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable\n2005761 - QE - Implementing crw-basic feature file\n2005783 - Fix accessibility issues in the \"Internal\" and \"Internal - Attached Mode\" Installation Flow\n2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty\n2005854 - SSH NodePort service is created for each VM\n2005901 - KS, KCM and KA going Degraded during master nodes upgrade\n2005902 - Current UI flow for MCG only deployment is confusing and doesn\u0027t reciprocate any message to the end-user\n2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics\n2005971 - Change telemeter to report the Application Services product usage metrics\n2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files\n2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased\n2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types\n2006101 - Power off fails for drivers that don\u0027t support Soft power off\n2006243 - Metal IPI upgrade jobs are running out of disk space\n2006291 - 
bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn\u0027t use the 0th address\n2006308 - Backing Store YAML tab on click displays a blank screen on UI\n2006325 - Multicast is broken across nodes\n2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators\n2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource\n2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2006690 - OS boot failure \"x64 Exception Type 06 - Invalid Opcode Exception\"\n2006714 - add retry for etcd errors in kube-apiserver\n2006767 - KubePodCrashLooping may not fire\n2006803 - Set CoreDNS cache entries for forwarded zones\n2006861 - Add Sprint 207 part 2 translations\n2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap\n2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors\n2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded\n2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick\n2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails\n2007271 - CI Integration for Knative test cases\n2007289 - kubevirt tests are failing in CI\n2007322 - Devfile/Dockerfile import does not work for unsupported git host\n2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3. 
\n2007379 - Events are not generated for master offset  for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all 
the kebab actions. \n2008521 - gcp-hostname service should correct invalid search entries in resolv.conf\n2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount\n2008539 - Registry doesn\u0027t fall back to secondary ImageContentSourcePolicy Mirror\n2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers\n2008599 - Azure Stack UPI does not have Internal Load Balancer\n2008612 - Plugin asset proxy does not pass through browser cache headers\n2008712 - VPA webhook timeout prevents all pods from starting\n2008733 - kube-scheduler: exposed /debug/pprof port\n2008911 - Prometheus repeatedly scaling prometheus-operator replica set\n2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]\n2008987 - OpenShift SDN Hosted Egress IP\u0027s are not being scheduled to nodes after upgrade to 4.8.12\n2009055 - Instances of OCS to be replaced with ODF on UI\n2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs\n2009083 - opm blocks pruning of existing bundles during add\n2009111 - [IPI-on-GCP] \u0027Install a cluster with nested virtualization enabled\u0027 failed due to unable to launch compute instances\n2009131 - [e2e][automation] add more test about vmi\n2009148 - [e2e][automation] test vm nic presets and options\n2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator\n2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family\n2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted\n2009384 - UI changes to support BindableKinds CRD changes\n2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped\n2009424 - Deployment upgrade is failing availability check\n2009454 - Change web terminal subscription permissions from get to list\n2009465 - container-selinux 
should come from rhel8-appstream\n2009514 - Bump OVS to 2.16-15\n2009555 - Supermicro X11 system not booting from vMedia with AI\n2009623 - Console: Observe \u003e Metrics page: Table pagination menu shows bullet points\n2009664 - Git Import: Edit of knative service doesn\u0027t work as expected for git import flow\n2009699 - Failure to validate flavor RAM\n2009754 - Footer is not sticky anymore in import forms\n2009785 - CRI-O\u0027s version file should be pinned by MCO\n2009791 - Installer:  ibmcloud ignores install-config values\n2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13\n2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo\n2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2009873 - Stale Logical Router Policies and Annotations for a given node\n2009879 - There should be test-suite coverage to ensure admin-acks work as expected\n2009888 - SRO package name collision between official and community version\n2010073 - uninstalling and then reinstalling sriov-network-operator is not working\n2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node. 
\n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift  clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. 
\n2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default\n2013751 - Service details page is showing wrong in-cluster hostname\n2013787 - There are two tittle \u0027Network Attachment Definition Details\u0027 on NAD details page\n2013871 - Resource table headings are not aligned with their column data\n2013895 - Cannot enable accelerated network via MachineSets on Azure\n2013920 - \"--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude\"\n2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)\n2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain\n2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)\n2013996 - Project detail page: Action \"Delete Project\" does nothing for the default project\n2014071 - Payload imagestream new tags not properly updated during cluster upgrade\n2014153 - SRIOV exclusive pooling\n2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace\n2014238 - AWS console test is failing on importing duplicate YAML definitions\n2014245 - Several aria-labels, external links, and labels aren\u0027t internationalized\n2014248 - Several files aren\u0027t internationalized\n2014352 - Could not filter out machine by using node name on machines page\n2014464 - Unexpected spacing/padding below navigation groups in developer perspective\n2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages\n2014486 - Integration Tests: OLM single namespace operator tests failing\n2014488 - Custom operator cannot change orders of condition tables\n2014497 - Regex slows down different forms and creates too much recursion errors in the log\n2014538 - Kuryr controller crash looping on  self._get_vip_port(loadbalancer).id   
\u0027NoneType\u0027 object has no attribute \u0027id\u0027\n2014614 - Metrics scraping requests should be assigned to exempt priority level\n2014710 - TestIngressStatus test is broken on Azure\n2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly\n2014995 - oc adm must-gather cannot gather audit logs with \u0027None\u0027 audit profile\n2015115 - [RFE] PCI passthrough\n2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl \u0027--resource-group-name\u0027 parameter\n2015154 - Support ports defined networks and primarySubnet\n2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic\n2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production\n2015386 - Possibility to add labels to the built-in OCP alerts\n2015395 - Table head on Affinity Rules modal is not fully expanded\n2015416 - CI implementation for Topology plugin\n2015418 - Project Filesystem query returns No datapoints found\n2015420 - No vm resource in project view\u0027s inventory\n2015422 - No conflict checking on snapshot name\n2015472 - Form and YAML view switch button should have distinguishable status\n2015481 - [4.10]  sriov-network-operator daemon pods are failing to start\n2015493 - Cloud Controller Manager Operator does not respect \u0027additionalTrustBundle\u0027 setting\n2015496 - Storage - PersistentVolumes : Claim colum value \u0027No Claim\u0027 in English\n2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization  : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All,  data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data 
--dry-run=server  makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN  Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain  olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking -  networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR  is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to  attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index\n2053218 - ImagePull fails with error  \"unable to pull manifest from example.com/busy.box:v5  invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can't unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show 'Unknown'\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is <secalert@redhat.com>. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2020-07-15-3 tvOS 13.4.8\n\ntvOS 13.4.8 is now available and addresses the following:\n\nAudio\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. 
\nCVE-2020-9889: JunDong Xie and XingWei Li of Ant-financial Light-Year\nSecurity Lab\n\nAudio\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2020-9888: JunDong Xie and XingWei Li of Ant-financial Light-Year\nSecurity Lab\nCVE-2020-9890: JunDong Xie and XingWei Li of Ant-financial Light-Year\nSecurity Lab\nCVE-2020-9891: JunDong Xie and XingWei Li of Ant-financial Light-Year\nSecurity Lab\n\nAVEVideoEncoder\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed by removing the\nvulnerable code. \nCVE-2020-9907: an anonymous researcher\n\nCrash Reporter\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: A memory corruption issue was addressed by removing the\nvulnerable code. \nCVE-2020-9865: Zhuo Liang of Qihoo 360 Vulcan Team working with 360\nBugCloud\n\nGeoServices\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious application may be able to read sensitive\nlocation information\nDescription: An authorization issue was addressed with improved state\nmanagement. \nCVE-2020-9933: Min (Spark) Zheng and Xiaolong Bai of Alibaba Inc. \n\niAP\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An attacker in a privileged network position may be able to\nexecute arbitrary code\nDescription: An input validation issue existed in Bluetooth. \nCVE-2020-9914: Andy Davis of NCC Group\n\nImageIO\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. 
\nCVE-2020-9936: Mickey Jin of Trend Micro\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An attacker in a privileged network position may be able to\ninject into active connections within a VPN tunnel\nDescription: A routing issue was addressed with improved\nrestrictions. \nCVE-2019-14899: William J. Tolley, Beau Kujath, and Jedidiah R. \nCrandall\n\nKernel\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: An attacker that has already achieved kernel code execution\nmay be able to bypass kernel memory mitigations\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2020-9909: Brandon Azad of Google Project Zero\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2020-9894: 0011 working with Trend Micro Zero Day Initiative\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing maliciously crafted web content may prevent\nContent Security Policy from being enforced\nDescription: An access issue existed in Content Security Policy. \nCVE-2020-9915: an anonymous researcher\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2020-9893: 0011 working with Trend Micro Zero Day Initiative\nCVE-2020-9895: Wen Xu of SSLab, Georgia Tech\n\nWebKit\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious attacker with arbitrary read and write capability\nmay be able to bypass Pointer Authentication\nDescription: Multiple issues were addressed with improved logic. 
\nCVE-2020-9910: Samuel Groß of Google Project Zero\n\nWebKit Page Loading\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A malicious attacker may be able to conceal the destination\nof a URL\nDescription: A URL Unicode encoding issue was addressed with improved\nstate management. \nCVE-2020-9916: Rakesh Mane (@RakeshMane10)\n\nWebKit Web Inspector\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: Copying a URL from Web Inspector may lead to command\ninjection\nDescription: A command injection issue existed in Web Inspector. \nCVE-2020-9862: Ophir Lojkine (@lovasoa)\n\nWi-Fi\nAvailable for: Apple TV 4K and Apple TV HD\nImpact: A remote attacker may be able to cause unexpected system\ntermination or corrupt kernel memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2020-9918: Jianjun Dai of 360 Alpha Lab working with 360 BugCloud\n(bugcloud.360.cn)\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Brandon Azad of Google Project Zero for\ntheir assistance. \n\nInstallation note:\n\nApple TV will periodically check for software updates. 
Alternatively,\nyou may manually check for software updates by selecting\n\"Settings -> System -> Software Update -> Update Software.\"\n\nTo check the current version of software, select\n\"Settings -> General -> About.\"\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl8PhQUACgkQBz4uGe3y\n0M33Vw//fmvay18+s9sn8Gv2VfSgT2VcmDHMNTch9QoYbm7spSflAc8zWdToUOpK\nfiJAVEHB+adcGy3syi4z+utNf3l1XchVMuaxLKzDyS7LDiDIwczivrr642A+ahlk\nvrHXcdwQkf0Y3QdQF9DwcOfzyNvaRRJ2eICKlrjm4BrcoP63eoBTGKgcZp6EAOQu\nc0X5M2F2GcV4VwSmSuzKtsNlkjWlaD55meVWjGZGGUp4d0tk0BtmAWISAXf2NfFF\nWQyKQ9snXMzzF4SRA3cbWqFFluKDYyPx7Lh2jLB+KcrTRCtuMi+cAu3QQezRwIUD\nLnKzLbAbOO8Mu67aLjoBdW0IdCHbGpdK6I/aGi0eV029+tBdcn5UOfPIhGT9WDkQ\ntlDr5RCqWvc02F6e5SetIGRY1YGV6DWqo0U1h6cBdVgnx5g3aIZzXihATMV+4bxj\nVijf8iDG5LsO4Bx8g1aekrn37OQnr7WuFHLZrHKZyQejn6IdOQ2fyzH43/0mLiE3\neaoGwghlFXhOpbUx26owjEkDuC5GgboctjefqtJ9Zu7yfSS2GDAq23Qp9IXy/Avf\ncIIB0bnz9Mk+2qrZ2GDZXBePacLoVSNvaBywyrs6MMANrsi3Ioq3xug8b8WnTozL\nlMrdAVr64+qTn0YTc6QwNs9golbRQh3z2U6Hk/niQXlWZilaK/s=\n=+zqK\n-----END PGP SIGNATURE-----\n\n\n. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nGNOME is the default desktop environment of Red Hat Enterprise Linux. \n\nThe following packages have been upgraded to a later upstream version:\ngnome-remote-desktop (0.1.8), pipewire (0.3.6), vte291 (0.52.4),\nwebkit2gtk3 (2.28.4), xdg-desktop-portal (1.6.0), xdg-desktop-portal-gtk\n(1.6.0). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.3 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nGDM must be restarted for this update to take effect. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1207179 - Select items matching non existing pattern does not unselect already selected\n1566027 - can't correctly compute contents size if hidden files are included\n1569868 - Browsing samba shares using gvfs is very slow\n1652178 - [RFE] perf-tool run on wayland\n1656262 - The terminal's character display is unclear on rhel8 guest after installing gnome\n1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled\n1692536 - login screen shows after gnome-initial-setup\n1706008 - Sound Effect sometimes fails to change to selected option. \n1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead. \n1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined\n1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly\n1758891 - tracker-devel subpackage missing from el8 repos\n1775345 - Rebase xdg-desktop-portal to 1.6\n1778579 - Nautilus does not respect umask settings. \n1779691 - Rebase xdg-desktop-portal-gtk to 1.6\n1794045 - There are two different high contrast versions of desktop icons\n1804719 - Update vte291 to 0.52.4\n1805929 - RHEL 8.1 gnome-shell-extension errors\n1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp\n1814820 - No checkbox to install updates in the shutdown dialog\n1816070 - \"search for an application to open this file\" dialog broken\n1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution\n1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1817143 - Rebase WebKitGTK to 2.28\n1820759 - Include IO stall fixes\n1820760 - Include IO fixes\n1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening\n1827030 - gnome-settings-daemon: subscription notification on CentOS Stream\n1829369 - 
CVE-2020-11793 webkitgtk: use-after-free via crafted web content\n1832347 - [Rebase] Rebase pipewire to 0.3.x\n1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install\n1837381 - Backport screen cast improvements to 8.3\n1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version\n1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6\n1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113\n1840080 - Can not control top bar menus via keys in Wayland\n1840788 - [flatpak][rhel8] unable to build potrace as dependency\n1843486 - Software crash after clicking Updates tab\n1844578 - anaconda very rarely crashes at startup with a pygobject traceback\n1846191 - usb adapters hotplug crashes gnome-shell\n1847051 - JS ERROR: TypeError: area is null\n1847061 - File search doesn't work under certain locales\n1847062 - gnome-remote-desktop crash on QXL graphics\n1847203 - gnome-shell: get_top_visible_window_actor(): gnome-shell killed by SIGSEGV\n1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow\n1854734 - PipeWire 0.2 should be required by xdg-desktop-portal\n1866332 - Remove obsolete libusb-devel dependency\n1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at \"Started GNOME Display Manager\" - GDM regression issue. 
\n1872270 - WebKit renderer hangs on Cockpit\n1873093 - CVE-2020-14391 gnome-settings-daemon: Red Hat Customer Portal password logged and passed as command line argument when user registers through GNOME control center\n1873963 - Failed to start session: org.gnome.Mutter.ScreenCast API version 2 lower than minimum supported version 3\n1876462 - CVE-2020-3885 webkitgtk: Incorrect processing of file URLs\n1876463 - CVE-2020-3894 webkitgtk: Race condition allows reading of restricted memory\n1876465 - CVE-2020-3895 webkitgtk: Memory corruption triggered by a malicious web content\n1876468 - CVE-2020-3897 webkitgtk: Type confusion leading to arbitrary code execution\n1876470 - CVE-2020-3899 webkitgtk: Memory consumption issue leading to arbitrary code execution\n1876472 - CVE-2020-3900 webkitgtk: Memory corruption  triggered by a malicious web content\n1876473 - CVE-2020-3901 webkitgtk: Type confusion leading to arbitrary code execution\n1876476 - CVE-2020-3902 webkitgtk: Input validation issue leading to cross-site script attack\n1876516 - CVE-2020-3862 webkitgtk: Denial of service via incorrect memory handling\n1876518 - CVE-2020-3864 webkitgtk: Non-unique security origin for DOM object contexts\n1876521 - CVE-2020-3865 webkitgtk: Incorrect security check for a top-level DOM object context\n1876522 - CVE-2020-3867 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876523 - CVE-2020-3868 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876536 - CVE-2019-8710 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876537 - CVE-2019-8743 webkitgtk: Multiple memory corruption  issues leading to arbitrary code execution\n1876540 - CVE-2019-8764 webkitgtk: Incorrect state  management leading to universal cross-site scripting\n1876543 - CVE-2019-8766 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876545 - CVE-2019-8782 webkitgtk: Multiple memory 
corruption issues leading to arbitrary code execution\n1876548 - CVE-2019-8783 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876549 - CVE-2019-8808 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876550 - CVE-2019-8811 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876552 - CVE-2019-8812 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876553 - CVE-2019-8813 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876554 - CVE-2019-8814 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876555 - CVE-2019-8815 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876556 - CVE-2019-8816 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876590 - CVE-2019-8819 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876591 - CVE-2019-8820 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876594 - CVE-2019-8823 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876607 - CVE-2019-8625 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876611 - CVE-2019-8720 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876617 - CVE-2019-8769 webkitgtk: Websites could reveal browsing history\n1876619 - CVE-2019-8771 webkitgtk: Violation of iframe sandboxing policy\n1877853 - File descriptors are being left behind on logout of RHEL 8 session\n1879532 - CVE-2020-9862 webkitgtk: Command injection in web inspector\n1879535 - CVE-2020-9893 webkitgtk: Use-after-free may lead to application termination or arbitrary code execution\n1879536 - CVE-2020-9894 webkitgtk: Out-of-bounds read may lead to unexpected application termination or arbitrary code execution\n1879538 - CVE-2020-9895 
webkitgtk: Use-after-free may lead to application termination or arbitrary code execution\n1879540 - CVE-2020-9915 webkitgtk: Access issue in content security policy\n1879541 - CVE-2020-9925 webkitgtk: A logic issue may lead to cross site scripting\n1879545 - CVE-2020-9802 webkitgtk: Logic issue may lead to arbitrary code execution\n1879557 - CVE-2020-9803 webkitgtk: Memory corruption may lead to arbitrary code execution\n1879559 - CVE-2020-9805 webkitgtk: Logic issue may lead to cross site scripting\n1879563 - CVE-2020-9806 webkitgtk: Memory corruption may lead to arbitrary code execution\n1879564 - CVE-2020-9807 webkitgtk: Memory corruption may lead to arbitrary code execution\n1879566 - CVE-2020-9843 webkitgtk: Input validation issue may lead to cross site scripting\n1879568 - CVE-2020-9850 webkitgtk: Logic issue may lead to arbitrary code execution\n1880339 - Right GLX stereo texture is potentially leaked for each closed window\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nLibRaw-0.19.5-2.el8.src.rpm\nPackageKit-1.1.12-6.el8.src.rpm\ndleyna-renderer-0.6.0-3.el8.src.rpm\nfrei0r-plugins-1.6.1-7.el8.src.rpm\ngdm-3.28.3-34.el8.src.rpm\ngnome-control-center-3.28.2-22.el8.src.rpm\ngnome-photos-3.28.1-3.el8.src.rpm\ngnome-remote-desktop-0.1.8-3.el8.src.rpm\ngnome-session-3.28.1-10.el8.src.rpm\ngnome-settings-daemon-3.32.0-11.el8.src.rpm\ngnome-shell-3.32.2-20.el8.src.rpm\ngnome-shell-extensions-3.32.1-11.el8.src.rpm\ngnome-terminal-3.28.3-2.el8.src.rpm\ngtk3-3.22.30-6.el8.src.rpm\ngvfs-1.36.2-10.el8.src.rpm\nmutter-3.32.2-48.el8.src.rpm\nnautilus-3.28.1-14.el8.src.rpm\npipewire-0.3.6-1.el8.src.rpm\npipewire0.2-0.2.7-6.el8.src.rpm\npotrace-1.15-3.el8.src.rpm\ntracker-2.1.5-2.el8.src.rpm\nvte291-0.52.4-2.el8.src.rpm\nwebkit2gtk3-2.28.4-1.el8.src.rpm\nwebrtc-audio-processing-0.3-9.el8.src.rpm\nxdg-desktop-portal-1.6.0-2.el8.src.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.src.rpm\n\naarch64:\nPackageKit-1.1.12-6.el8.aarch64.rpm\nPackageKit-command-not-found-1.1.12-6.el8.aarch64.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-cron-1.1.12-6.el8.aarch64.rpm\nPackageKit-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-debugsource-1.1.12-6.el8.aarch64.rpm\nPackageKit-glib-1.1.12-6.el8.aarch64.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.aarch64.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.aarch64.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.aarch64.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.aarch64.rpm\nfrei0r-plugins-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.aarch64.rpm\nfrei0r-plugins-opencv-debuginfo-1.6.1-7.el8.aarch64.rpm\ngdm-3.28.3-34.el8.aarch64.rpm\ngdm-debuginfo-3.28.3-34.el8.aarch64.rpm\ngdm-debugsource-3.28.3-34.el8.aarch64.rpm\ngnome-control-center-3.28.2-22.el8.aarch64.rpm\ngnome-control-center-debuginfo-3.28.2
-22.el8.aarch64.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.aarch64.rpm\ngnome-remote-desktop-0.1.8-3.el8.aarch64.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.aarch64.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.aarch64.rpm\ngnome-session-3.28.1-10.el8.aarch64.rpm\ngnome-session-debuginfo-3.28.1-10.el8.aarch64.rpm\ngnome-session-debugsource-3.28.1-10.el8.aarch64.rpm\ngnome-session-wayland-session-3.28.1-10.el8.aarch64.rpm\ngnome-session-xsession-3.28.1-10.el8.aarch64.rpm\ngnome-settings-daemon-3.32.0-11.el8.aarch64.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.aarch64.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.aarch64.rpm\ngnome-shell-3.32.2-20.el8.aarch64.rpm\ngnome-shell-debuginfo-3.32.2-20.el8.aarch64.rpm\ngnome-shell-debugsource-3.32.2-20.el8.aarch64.rpm\ngnome-terminal-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.aarch64.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.aarch64.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.aarch64.rpm\ngtk-update-icon-cache-3.22.30-6.el8.aarch64.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-3.22.30-6.el8.aarch64.rpm\ngtk3-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-debugsource-3.22.30-6.el8.aarch64.rpm\ngtk3-devel-3.22.30-6.el8.aarch64.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-immodule-xim-3.22.30-6.el8.aarch64.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8.aarch64.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.aarch64.rpm\ngvfs-1.36.2-10.el8.aarch64.rpm\ngvfs-afc-1.36.2-10.el8.aarch64.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-afp-1.36.2-10.el8.aarch64.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-archive-1.36.2-10.el8.aarch64.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-client-1.36.2-10.el8.aarch64.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.aarch64.rpm\n
gvfs-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-debugsource-1.36.2-10.el8.aarch64.rpm\ngvfs-devel-1.36.2-10.el8.aarch64.rpm\ngvfs-fuse-1.36.2-10.el8.aarch64.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-goa-1.36.2-10.el8.aarch64.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-gphoto2-1.36.2-10.el8.aarch64.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-mtp-1.36.2-10.el8.aarch64.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.aarch64.rpm\ngvfs-smb-1.36.2-10.el8.aarch64.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.aarch64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.aarch64.rpm\nlibsoup-debugsource-2.62.3-2.el8.aarch64.rpm\nlibsoup-devel-2.62.3-2.el8.aarch64.rpm\nmutter-3.32.2-48.el8.aarch64.rpm\nmutter-debuginfo-3.32.2-48.el8.aarch64.rpm\nmutter-debugsource-3.32.2-48.el8.aarch64.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.aarch64.rpm\nnautilus-3.28.1-14.el8.aarch64.rpm\nnautilus-debuginfo-3.28.1-14.el8.aarch64.rpm\nnautilus-debugsource-3.28.1-14.el8.aarch64.rpm\nnautilus-extensions-3.28.1-14.el8.aarch64.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.aarch64.rpm\npipewire-0.3.6-1.el8.aarch64.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-debugsource-0.3.6-1.el8.aarch64.rpm\npipewire-devel-0.3.6-1.el8.aarch64.rpm\npipewire-doc-0.3.6-1.el8.aarch64.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-libs-0.3.6-1.el8.aarch64.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire-utils-0.3.6-1.el8.aarch64.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.aarch64.rpm\npipewire0.2-debugsource-0.2.7-6.el8.aarch64.rpm\npipewire0.2-devel-0.2.7-6.el8.aarch64.rpm\npipewire0.2-libs-0.2.7-6.el8.aarch64.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.aarch64.rpm\npotrace-1.15-3.el8.aarch64.rpm\npotrace-debuginfo-1.15-3.el8.aarch64.rpm\npotrace-debugsource-1.15-3.el8.aarch64.rpm\npygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm\npygobject3-debugsource-3.28.3-2.el8.aarch64.rpm\npython3-gobject-3.28.3-2.el8.aarch6
4.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm\ntracker-2.1.5-2.el8.aarch64.rpm\ntracker-debuginfo-2.1.5-2.el8.aarch64.rpm\ntracker-debugsource-2.1.5-2.el8.aarch64.rpm\nvte-profile-0.52.4-2.el8.aarch64.rpm\nvte291-0.52.4-2.el8.aarch64.rpm\nvte291-debuginfo-0.52.4-2.el8.aarch64.rpm\nvte291-debugsource-0.52.4-2.el8.aarch64.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.aarch64.rpm\nwebkit2gtk3-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.aarch64.rpm\nwebrtc-audio-processing-0.3-9.el8.aarch64.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.aarch64.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.aarch64.rpm\nxdg-desktop-portal-1.6.0-2.el8.aarch64.rpm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.aarch64.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.aarch64.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.aarch64.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.aarch64.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.aarch64.rpm\n\nnoarch:\ngnome-classic-session-3.32.1-11.el8.noarch.rpm\ngnome-control-center-filesystem-3.28.2-22.el8.noarch.rpm\ngnome-shell-extension-apps-menu-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-auto-move-windows-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-common-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-dash-to-dock-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-desktop-icons-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-disable-screenshield-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-drive-menu-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-horizontal-workspaces-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension
-launch-new-instance-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-native-window-placement-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-no-hot-corner-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-panel-favorites-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-places-menu-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-screenshot-window-sizer-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-systemMonitor-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-top-icons-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-updates-dialog-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-user-theme-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-window-grouper-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-window-list-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-windowsNavigator-3.32.1-11.el8.noarch.rpm\ngnome-shell-extension-workspace-indicator-3.32.1-11.el8.noarch.rpm\n\nppc64le:\nLibRaw-0.19.5-2.el8.ppc64le.rpm\nLibRaw-debuginfo-0.19.5-2.el8.ppc64le.rpm\nLibRaw-debugsource-0.19.5-2.el8.ppc64le.rpm\nLibRaw-samples-debuginfo-0.19.5-2.el8.ppc64le.rpm\nPackageKit-1.1.12-6.el8.ppc64le.rpm\nPackageKit-command-not-found-1.1.12-6.el8.ppc64le.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-cron-1.1.12-6.el8.ppc64le.rpm\nPackageKit-debuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-debugsource-1.1.12-6.el8.ppc64le.rpm\nPackageKit-glib-1.1.12-6.el8.ppc64le.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.ppc64le.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.ppc64le.rpm\ndleyna-renderer-0.6.0-3.el8.ppc64le.rpm\ndleyna-renderer-debuginfo-0.6.0-3.el8.ppc64le.rpm\ndleyna-renderer-debugsource-0.6.0-3.el8.ppc64le.rpm\nfrei0r-plugins-1.6.1-7.el8.ppc64le.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.ppc64le.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.ppc64le.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.ppc64
le.rpm\nfrei0r-plugins-opencv-debuginfo-1.6.1-7.el8.ppc64le.rpm\ngdm-3.28.3-34.el8.ppc64le.rpm\ngdm-debuginfo-3.28.3-34.el8.ppc64le.rpm\ngdm-debugsource-3.28.3-34.el8.ppc64le.rpm\ngnome-control-center-3.28.2-22.el8.ppc64le.rpm\ngnome-control-center-debuginfo-3.28.2-22.el8.ppc64le.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.ppc64le.rpm\ngnome-photos-3.28.1-3.el8.ppc64le.rpm\ngnome-photos-debuginfo-3.28.1-3.el8.ppc64le.rpm\ngnome-photos-debugsource-3.28.1-3.el8.ppc64le.rpm\ngnome-photos-tests-3.28.1-3.el8.ppc64le.rpm\ngnome-remote-desktop-0.1.8-3.el8.ppc64le.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.ppc64le.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.ppc64le.rpm\ngnome-session-3.28.1-10.el8.ppc64le.rpm\ngnome-session-debuginfo-3.28.1-10.el8.ppc64le.rpm\ngnome-session-debugsource-3.28.1-10.el8.ppc64le.rpm\ngnome-session-wayland-session-3.28.1-10.el8.ppc64le.rpm\ngnome-session-xsession-3.28.1-10.el8.ppc64le.rpm\ngnome-settings-daemon-3.32.0-11.el8.ppc64le.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.ppc64le.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.ppc64le.rpm\ngnome-shell-3.32.2-20.el8.ppc64le.rpm\ngnome-shell-debuginfo-3.32.2-20.el8.ppc64le.rpm\ngnome-shell-debugsource-3.32.2-20.el8.ppc64le.rpm\ngnome-terminal-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.ppc64le.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.ppc64le.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.ppc64le.rpm\ngtk-update-icon-cache-3.22.30-6.el8.ppc64le.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-3.22.30-6.el8.ppc64le.rpm\ngtk3-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-debugsource-3.22.30-6.el8.ppc64le.rpm\ngtk3-devel-3.22.30-6.el8.ppc64le.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-immodule-xim-3.22.30-6.el8.ppc64le.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngtk3-immodules-debuginfo-
3.22.30-6.el8.ppc64le.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.ppc64le.rpm\ngvfs-1.36.2-10.el8.ppc64le.rpm\ngvfs-afc-1.36.2-10.el8.ppc64le.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-afp-1.36.2-10.el8.ppc64le.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-archive-1.36.2-10.el8.ppc64le.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-client-1.36.2-10.el8.ppc64le.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-debugsource-1.36.2-10.el8.ppc64le.rpm\ngvfs-devel-1.36.2-10.el8.ppc64le.rpm\ngvfs-fuse-1.36.2-10.el8.ppc64le.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-goa-1.36.2-10.el8.ppc64le.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-gphoto2-1.36.2-10.el8.ppc64le.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-mtp-1.36.2-10.el8.ppc64le.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.ppc64le.rpm\ngvfs-smb-1.36.2-10.el8.ppc64le.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.ppc64le.rpm\nlibsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm\nlibsoup-debugsource-2.62.3-2.el8.ppc64le.rpm\nlibsoup-devel-2.62.3-2.el8.ppc64le.rpm\nmutter-3.32.2-48.el8.ppc64le.rpm\nmutter-debuginfo-3.32.2-48.el8.ppc64le.rpm\nmutter-debugsource-3.32.2-48.el8.ppc64le.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.ppc64le.rpm\nnautilus-3.28.1-14.el8.ppc64le.rpm\nnautilus-debuginfo-3.28.1-14.el8.ppc64le.rpm\nnautilus-debugsource-3.28.1-14.el8.ppc64le.rpm\nnautilus-extensions-3.28.1-14.el8.ppc64le.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.ppc64le.rpm\npipewire-0.3.6-1.el8.ppc64le.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-debugsource-0.3.6-1.el8.ppc64le.rpm\npipewire-devel-0.3.6-1.el8.ppc64le.rpm\npipewire-doc-0.3.6-1.el8.ppc64le.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-libs-0.3.6-1.el8.ppc64le.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.ppc64le.rpm\npipewire-utils-0.3.6-1.el8.ppc64le.rpm\npipewire-utils-debuginfo-0.3.6-1.el8
.ppc64le.rpm\npipewire0.2-debugsource-0.2.7-6.el8.ppc64le.rpm\npipewire0.2-devel-0.2.7-6.el8.ppc64le.rpm\npipewire0.2-libs-0.2.7-6.el8.ppc64le.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.ppc64le.rpm\npotrace-1.15-3.el8.ppc64le.rpm\npotrace-debuginfo-1.15-3.el8.ppc64le.rpm\npotrace-debugsource-1.15-3.el8.ppc64le.rpm\npygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm\npygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm\ntracker-2.1.5-2.el8.ppc64le.rpm\ntracker-debuginfo-2.1.5-2.el8.ppc64le.rpm\ntracker-debugsource-2.1.5-2.el8.ppc64le.rpm\nvte-profile-0.52.4-2.el8.ppc64le.rpm\nvte291-0.52.4-2.el8.ppc64le.rpm\nvte291-debuginfo-0.52.4-2.el8.ppc64le.rpm\nvte291-debugsource-0.52.4-2.el8.ppc64le.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.ppc64le.rpm\nwebkit2gtk3-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.ppc64le.rpm\nwebrtc-audio-processing-0.3-9.el8.ppc64le.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.ppc64le.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.ppc64le.rpm\nxdg-desktop-portal-1.6.0-2.el8.ppc64le.rpm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.ppc64le.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.ppc64le.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.ppc64le.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.ppc64le.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.ppc64le.rpm\n\ns390x:\nPackageKit-1.1.12-6.el8.s390x.rpm\nPackageKit-command-not-found-1.1.12-6.el8.s390x.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-cron-1.1.12-6.el8
.s390x.rpm\nPackageKit-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-debugsource-1.1.12-6.el8.s390x.rpm\nPackageKit-glib-1.1.12-6.el8.s390x.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.s390x.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.s390x.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.s390x.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.s390x.rpm\nfrei0r-plugins-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.s390x.rpm\nfrei0r-plugins-opencv-debuginfo-1.6.1-7.el8.s390x.rpm\ngdm-3.28.3-34.el8.s390x.rpm\ngdm-debuginfo-3.28.3-34.el8.s390x.rpm\ngdm-debugsource-3.28.3-34.el8.s390x.rpm\ngnome-control-center-3.28.2-22.el8.s390x.rpm\ngnome-control-center-debuginfo-3.28.2-22.el8.s390x.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.s390x.rpm\ngnome-remote-desktop-0.1.8-3.el8.s390x.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.s390x.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.s390x.rpm\ngnome-session-3.28.1-10.el8.s390x.rpm\ngnome-session-debuginfo-3.28.1-10.el8.s390x.rpm\ngnome-session-debugsource-3.28.1-10.el8.s390x.rpm\ngnome-session-wayland-session-3.28.1-10.el8.s390x.rpm\ngnome-session-xsession-3.28.1-10.el8.s390x.rpm\ngnome-settings-daemon-3.32.0-11.el8.s390x.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.s390x.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.s390x.rpm\ngnome-shell-3.32.2-20.el8.s390x.rpm\ngnome-shell-debuginfo-3.32.2-20.el8.s390x.rpm\ngnome-shell-debugsource-3.32.2-20.el8.s390x.rpm\ngnome-terminal-3.28.3-2.el8.s390x.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.s390x.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.s390x.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.s390x.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.s390x.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.s390x.rpm\ngtk-update-icon-cache-3.22.30-6.el8.s390x.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.e
l8.s390x.rpm\ngtk3-3.22.30-6.el8.s390x.rpm\ngtk3-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-debugsource-3.22.30-6.el8.s390x.rpm\ngtk3-devel-3.22.30-6.el8.s390x.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-immodule-xim-3.22.30-6.el8.s390x.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8.s390x.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.s390x.rpm\ngvfs-1.36.2-10.el8.s390x.rpm\ngvfs-afp-1.36.2-10.el8.s390x.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-archive-1.36.2-10.el8.s390x.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-client-1.36.2-10.el8.s390x.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-debugsource-1.36.2-10.el8.s390x.rpm\ngvfs-devel-1.36.2-10.el8.s390x.rpm\ngvfs-fuse-1.36.2-10.el8.s390x.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-goa-1.36.2-10.el8.s390x.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-gphoto2-1.36.2-10.el8.s390x.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-mtp-1.36.2-10.el8.s390x.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.s390x.rpm\ngvfs-smb-1.36.2-10.el8.s390x.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.s390x.rpm\nlibsoup-debuginfo-2.62.3-2.el8.s390x.rpm\nlibsoup-debugsource-2.62.3-2.el8.s390x.rpm\nlibsoup-devel-2.62.3-2.el8.s390x.rpm\nmutter-3.32.2-48.el8.s390x.rpm\nmutter-debuginfo-3.32.2-48.el8.s390x.rpm\nmutter-debugsource-3.32.2-48.el8.s390x.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.s390x.rpm\nnautilus-3.28.1-14.el8.s390x.rpm\nnautilus-debuginfo-3.28.1-14.el8.s390x.rpm\nnautilus-debugsource-3.28.1-14.el8.s390x.rpm\nnautilus-extensions-3.28.1-14.el8.s390x.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.s390x.rpm\npipewire-0.3.6-1.el8.s390x.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-debugsource-0.3.6-1.el8.s390x.rpm\npipewire-devel-0.3.6-1.el8.s390x.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-libs-0.3.6-1.el8.s390x.rpm
\npipewire-libs-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire-utils-0.3.6-1.el8.s390x.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.s390x.rpm\npipewire0.2-debugsource-0.2.7-6.el8.s390x.rpm\npipewire0.2-devel-0.2.7-6.el8.s390x.rpm\npipewire0.2-libs-0.2.7-6.el8.s390x.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.s390x.rpm\npotrace-1.15-3.el8.s390x.rpm\npotrace-debuginfo-1.15-3.el8.s390x.rpm\npotrace-debugsource-1.15-3.el8.s390x.rpm\npygobject3-debuginfo-3.28.3-2.el8.s390x.rpm\npygobject3-debugsource-3.28.3-2.el8.s390x.rpm\npython3-gobject-3.28.3-2.el8.s390x.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.s390x.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm\ntracker-2.1.5-2.el8.s390x.rpm\ntracker-debuginfo-2.1.5-2.el8.s390x.rpm\ntracker-debugsource-2.1.5-2.el8.s390x.rpm\nvte-profile-0.52.4-2.el8.s390x.rpm\nvte291-0.52.4-2.el8.s390x.rpm\nvte291-debuginfo-0.52.4-2.el8.s390x.rpm\nvte291-debugsource-0.52.4-2.el8.s390x.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.s390x.rpm\nwebkit2gtk3-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.s390x.rpm\nwebrtc-audio-processing-0.3-9.el8.s390x.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.s390x.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.s390x.rpm\nxdg-desktop-portal-1.6.0-2.el8.s390x.rpm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.s390x.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.s390x.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.s390x.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.s390x.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.s390x.rpm\n\nx86_64:\nLibRaw-0.19.5-2.el8.i686.rpm\nLibRaw-0.19.5-2.el8.x86_64.rpm\nLibRaw-debuginfo-0.19.5-2.el8.i686.rpm\nLibRaw-debuginfo-0.19.5-2.el8.x
86_64.rpm\nLibRaw-debugsource-0.19.5-2.el8.i686.rpm\nLibRaw-debugsource-0.19.5-2.el8.x86_64.rpm\nLibRaw-samples-debuginfo-0.19.5-2.el8.i686.rpm\nLibRaw-samples-debuginfo-0.19.5-2.el8.x86_64.rpm\nPackageKit-1.1.12-6.el8.x86_64.rpm\nPackageKit-command-not-found-1.1.12-6.el8.x86_64.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-command-not-found-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-cron-1.1.12-6.el8.x86_64.rpm\nPackageKit-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-debugsource-1.1.12-6.el8.i686.rpm\nPackageKit-debugsource-1.1.12-6.el8.x86_64.rpm\nPackageKit-glib-1.1.12-6.el8.i686.rpm\nPackageKit-glib-1.1.12-6.el8.x86_64.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-glib-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-gstreamer-plugin-1.1.12-6.el8.x86_64.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-gstreamer-plugin-debuginfo-1.1.12-6.el8.x86_64.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.i686.rpm\nPackageKit-gtk3-module-1.1.12-6.el8.x86_64.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.i686.rpm\nPackageKit-gtk3-module-debuginfo-1.1.12-6.el8.x86_64.rpm\ndleyna-renderer-0.6.0-3.el8.x86_64.rpm\ndleyna-renderer-debuginfo-0.6.0-3.el8.x86_64.rpm\ndleyna-renderer-debugsource-0.6.0-3.el8.x86_64.rpm\nfrei0r-plugins-1.6.1-7.el8.x86_64.rpm\nfrei0r-plugins-debuginfo-1.6.1-7.el8.x86_64.rpm\nfrei0r-plugins-debugsource-1.6.1-7.el8.x86_64.rpm\nfrei0r-plugins-opencv-1.6.1-7.el8.x86_64.rpm\nfrei0r-plugins-opencv-debuginfo-1.6.1-7.el8.x86_64.rpm\ngdm-3.28.3-34.el8.i686.rpm\ngdm-3.28.3-34.el8.x86_64.rpm\ngdm-debuginfo-3.28.3-34.el8.i686.rpm\ngdm-debuginfo-3.28.3-34.el8.x86_64.rpm\ngdm-debugsource-3.28.3-34.el8.i686.rpm\ngdm-debugsource-3.28.3-34.el8.x86_64.rpm\ngnome-control-center-3.28.2-22.el8.x86_64.rpm\ngnome-control-center-debuginfo-3.28.2-22.el8.x86_64.rpm\ngnome-control-center-debugsource-3.28.2-22.el8.x86_64.rpm\ngnome-photos-3.28.1-3.el8.x86_64.rp
m\ngnome-photos-debuginfo-3.28.1-3.el8.x86_64.rpm\ngnome-photos-debugsource-3.28.1-3.el8.x86_64.rpm\ngnome-photos-tests-3.28.1-3.el8.x86_64.rpm\ngnome-remote-desktop-0.1.8-3.el8.x86_64.rpm\ngnome-remote-desktop-debuginfo-0.1.8-3.el8.x86_64.rpm\ngnome-remote-desktop-debugsource-0.1.8-3.el8.x86_64.rpm\ngnome-session-3.28.1-10.el8.x86_64.rpm\ngnome-session-debuginfo-3.28.1-10.el8.x86_64.rpm\ngnome-session-debugsource-3.28.1-10.el8.x86_64.rpm\ngnome-session-wayland-session-3.28.1-10.el8.x86_64.rpm\ngnome-session-xsession-3.28.1-10.el8.x86_64.rpm\ngnome-settings-daemon-3.32.0-11.el8.x86_64.rpm\ngnome-settings-daemon-debuginfo-3.32.0-11.el8.x86_64.rpm\ngnome-settings-daemon-debugsource-3.32.0-11.el8.x86_64.rpm\ngnome-shell-3.32.2-20.el8.x86_64.rpm\ngnome-shell-debuginfo-3.32.2-20.el8.x86_64.rpm\ngnome-shell-debugsource-3.32.2-20.el8.x86_64.rpm\ngnome-terminal-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-debuginfo-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-debugsource-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-nautilus-3.28.3-2.el8.x86_64.rpm\ngnome-terminal-nautilus-debuginfo-3.28.3-2.el8.x86_64.rpm\ngsettings-desktop-schemas-3.32.0-5.el8.i686.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.i686.rpm\ngsettings-desktop-schemas-devel-3.32.0-5.el8.x86_64.rpm\ngtk-update-icon-cache-3.22.30-6.el8.x86_64.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.i686.rpm\ngtk-update-icon-cache-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-3.22.30-6.el8.i686.rpm\ngtk3-3.22.30-6.el8.x86_64.rpm\ngtk3-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-debugsource-3.22.30-6.el8.i686.rpm\ngtk3-debugsource-3.22.30-6.el8.x86_64.rpm\ngtk3-devel-3.22.30-6.el8.i686.rpm\ngtk3-devel-3.22.30-6.el8.x86_64.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-devel-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-immodule-xim-3.22.30-6.el8.x86_64.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-immodule-xim-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8
.i686.rpm\ngtk3-immodules-debuginfo-3.22.30-6.el8.x86_64.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.i686.rpm\ngtk3-tests-debuginfo-3.22.30-6.el8.x86_64.rpm\ngvfs-1.36.2-10.el8.x86_64.rpm\ngvfs-afc-1.36.2-10.el8.x86_64.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-afc-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-afp-1.36.2-10.el8.x86_64.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-afp-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-archive-1.36.2-10.el8.x86_64.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-archive-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-client-1.36.2-10.el8.i686.rpm\ngvfs-client-1.36.2-10.el8.x86_64.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-client-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-debugsource-1.36.2-10.el8.i686.rpm\ngvfs-debugsource-1.36.2-10.el8.x86_64.rpm\ngvfs-devel-1.36.2-10.el8.i686.rpm\ngvfs-devel-1.36.2-10.el8.x86_64.rpm\ngvfs-fuse-1.36.2-10.el8.x86_64.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-fuse-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-goa-1.36.2-10.el8.x86_64.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-goa-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-gphoto2-1.36.2-10.el8.x86_64.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-gphoto2-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-mtp-1.36.2-10.el8.x86_64.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-mtp-debuginfo-1.36.2-10.el8.x86_64.rpm\ngvfs-smb-1.36.2-10.el8.x86_64.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.i686.rpm\ngvfs-smb-debuginfo-1.36.2-10.el8.x86_64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.i686.rpm\nlibsoup-debuginfo-2.62.3-2.el8.x86_64.rpm\nlibsoup-debugsource-2.62.3-2.el8.i686.rpm\nlibsoup-debugsource-2.62.3-2.el8.x86_64.rpm\nlibsoup-devel-2.62.3-2.el8.i686.rpm\nlibsoup-devel-2.62.3-2.el8.x86_64.rpm\nmutter-3.32.2-48.el8.i686.rpm\nmutter-3.32.2-48.el8.x86_64.rpm\nmutter-debuginfo-3.32.2-48.el8.i686.rpm\nmutter-debuginfo-3.32.2-48.el8.x86_64.rpm\nmutt
er-debugsource-3.32.2-48.el8.i686.rpm\nmutter-debugsource-3.32.2-48.el8.x86_64.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.i686.rpm\nmutter-tests-debuginfo-3.32.2-48.el8.x86_64.rpm\nnautilus-3.28.1-14.el8.x86_64.rpm\nnautilus-debuginfo-3.28.1-14.el8.i686.rpm\nnautilus-debuginfo-3.28.1-14.el8.x86_64.rpm\nnautilus-debugsource-3.28.1-14.el8.i686.rpm\nnautilus-debugsource-3.28.1-14.el8.x86_64.rpm\nnautilus-extensions-3.28.1-14.el8.i686.rpm\nnautilus-extensions-3.28.1-14.el8.x86_64.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.i686.rpm\nnautilus-extensions-debuginfo-3.28.1-14.el8.x86_64.rpm\npipewire-0.3.6-1.el8.i686.rpm\npipewire-0.3.6-1.el8.x86_64.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-alsa-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-debugsource-0.3.6-1.el8.i686.rpm\npipewire-debugsource-0.3.6-1.el8.x86_64.rpm\npipewire-devel-0.3.6-1.el8.i686.rpm\npipewire-devel-0.3.6-1.el8.x86_64.rpm\npipewire-doc-0.3.6-1.el8.x86_64.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-gstreamer-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-libs-0.3.6-1.el8.i686.rpm\npipewire-libs-0.3.6-1.el8.x86_64.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-libs-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire-utils-0.3.6-1.el8.x86_64.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.i686.rpm\npipewire-utils-debuginfo-0.3.6-1.el8.x86_64.rpm\npipewire0.2-debugsource-0.2.7-6.el8.i686.rpm\npipewire0.2-debugsource-0.2.7-6.el8.x86_64.rpm\npipewire0.2-devel-0.2.7-6.el8.i686.rpm\npipewire0.2-devel-0.2.7-6.el8.x86_64.rpm\npipewire0.2-libs-0.2.7-6.el8.i686.rpm\npipewire0.2-libs-0.2.7-6.el8.x86_64.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.i686.rpm\npipewire0.2-libs-debuginfo-0.2.7-6.el8.x86_64.rpm\npotrace-1.15-3.el8.i686.rpm\npotrace-1.15-3.el8.x86_64.rpm\npotrace-debuginfo-1.15-3.el8.i686.rpm\npotrace-debuginfo-1.15-3.el8.x86_64.rpm\npotrace-debugsource-1.15-3.el8.i686.rpm\npotrace-debugsourc
e-1.15-3.el8.x86_64.rpm\npygobject3-debuginfo-3.28.3-2.el8.i686.rpm\npygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm\npygobject3-debugsource-3.28.3-2.el8.i686.rpm\npygobject3-debugsource-3.28.3-2.el8.x86_64.rpm\npython3-gobject-3.28.3-2.el8.i686.rpm\npython3-gobject-3.28.3-2.el8.x86_64.rpm\npython3-gobject-base-3.28.3-2.el8.i686.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.i686.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.i686.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm\ntracker-2.1.5-2.el8.i686.rpm\ntracker-2.1.5-2.el8.x86_64.rpm\ntracker-debuginfo-2.1.5-2.el8.i686.rpm\ntracker-debuginfo-2.1.5-2.el8.x86_64.rpm\ntracker-debugsource-2.1.5-2.el8.i686.rpm\ntracker-debugsource-2.1.5-2.el8.x86_64.rpm\nvte-profile-0.52.4-2.el8.x86_64.rpm\nvte291-0.52.4-2.el8.i686.rpm\nvte291-0.52.4-2.el8.x86_64.rpm\nvte291-debuginfo-0.52.4-2.el8.i686.rpm\nvte291-debuginfo-0.52.4-2.el8.x86_64.rpm\nvte291-debugsource-0.52.4-2.el8.i686.rpm\nvte291-debugsource-0.52.4-2.el8.x86_64.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.i686.rpm\nvte291-devel-debuginfo-0.52.4-2.el8.x86_64.rpm\nwebkit2gtk3-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-devel-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.28.4-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.28.4-1.el8.x86_64.rpm\nwebrtc-audio-processing-0.3-9.el8.
i686.rpm\nwebrtc-audio-processing-0.3-9.el8.x86_64.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.i686.rpm\nwebrtc-audio-processing-debuginfo-0.3-9.el8.x86_64.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.i686.rpm\nwebrtc-audio-processing-debugsource-0.3-9.el8.x86_64.rpm\nxdg-desktop-portal-1.6.0-2.el8.x86_64.rpm\nxdg-desktop-portal-debuginfo-1.6.0-2.el8.x86_64.rpm\nxdg-desktop-portal-debugsource-1.6.0-2.el8.x86_64.rpm\nxdg-desktop-portal-gtk-1.6.0-1.el8.x86_64.rpm\nxdg-desktop-portal-gtk-debuginfo-1.6.0-1.el8.x86_64.rpm\nxdg-desktop-portal-gtk-debugsource-1.6.0-1.el8.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 8):\n\nSource:\ngsettings-desktop-schemas-3.32.0-5.el8.src.rpm\nlibsoup-2.62.3-2.el8.src.rpm\npygobject3-3.28.3-2.el8.src.rpm\n\naarch64:\ngsettings-desktop-schemas-3.32.0-5.el8.aarch64.rpm\nlibsoup-2.62.3-2.el8.aarch64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.aarch64.rpm\nlibsoup-debugsource-2.62.3-2.el8.aarch64.rpm\npygobject3-debuginfo-3.28.3-2.el8.aarch64.rpm\npygobject3-debugsource-3.28.3-2.el8.aarch64.rpm\npython3-gobject-base-3.28.3-2.el8.aarch64.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.aarch64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.aarch64.rpm\n\nppc64le:\ngsettings-desktop-schemas-3.32.0-5.el8.ppc64le.rpm\nlibsoup-2.62.3-2.el8.ppc64le.rpm\nlibsoup-debuginfo-2.62.3-2.el8.ppc64le.rpm\nlibsoup-debugsource-2.62.3-2.el8.ppc64le.rpm\npygobject3-debuginfo-3.28.3-2.el8.ppc64le.rpm\npygobject3-debugsource-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-base-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.ppc64le.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.ppc64le.rpm\n\ns390x:\ngsettings-desktop-schemas-3.32.0-5.el8.s390x.rpm\nlibsoup-2.62.3-2.el8.s390x.rpm\nlibsoup-debuginfo-2.62.3-2.el8.s390x.rpm\nlibsoup-debugsource-2.62.3-2.el8.s390x.rpm\npygobject3-debuginfo-3.28.3-2.el8.s390x.rpm\npygobject3-debugsource-3.28.3-2.el8.s390x.rpm\npython3-gobject-base-3.28.3-2.el8.s390x.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.s3
90x.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.s390x.rpm\n\nx86_64:\ngsettings-desktop-schemas-3.32.0-5.el8.x86_64.rpm\nlibsoup-2.62.3-2.el8.i686.rpm\nlibsoup-2.62.3-2.el8.x86_64.rpm\nlibsoup-debuginfo-2.62.3-2.el8.i686.rpm\nlibsoup-debuginfo-2.62.3-2.el8.x86_64.rpm\nlibsoup-debugsource-2.62.3-2.el8.i686.rpm\nlibsoup-debugsource-2.62.3-2.el8.x86_64.rpm\npygobject3-debuginfo-3.28.3-2.el8.x86_64.rpm\npygobject3-debugsource-3.28.3-2.el8.x86_64.rpm\npython3-gobject-base-3.28.3-2.el8.x86_64.rpm\npython3-gobject-base-debuginfo-3.28.3-2.el8.x86_64.rpm\npython3-gobject-debuginfo-3.28.3-2.el8.x86_64.rpm\n\nRed Hat CodeReady Linux Builder (v.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-9895"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "158688"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "158458"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 2.52
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-9895",
        "trust": 3.4
      },
      {
        "db": "PACKETSTORM",
        "id": "158688",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU95491800",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2434",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2659",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2812",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2785",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "158466",
        "trust": 0.6
      },
      {
        "db": "NSFOCUS",
        "id": "50226",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51500",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-188020",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9895",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168885",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "158458",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "158688"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "158458"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "id": "VAR-202010-1296",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188020"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:07:41.752000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT211292",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211292"
      },
      {
        "title": "HT211293",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211293"
      },
      {
        "title": "HT211294",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211294"
      },
      {
        "title": "HT211295",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211295"
      },
      {
        "title": "HT211288",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211288"
      },
      {
        "title": "HT211290",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211290"
      },
      {
        "title": "HT211291",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211291"
      },
      {
        "title": "HT211293",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211293"
      },
      {
        "title": "HT211294",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211294"
      },
      {
        "title": "HT211295",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211295"
      },
      {
        "title": "HT211288",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211288"
      },
      {
        "title": "HT211290",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211290"
      },
      {
        "title": "HT211291",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211291"
      },
      {
        "title": "HT211292",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211292"
      },
      {
        "title": "Fixes for WebKit component security vulnerabilities in multiple Apple products",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=124597"
      },
      {
        "title": "Debian Security Advisories: DSA-4739-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=1f5a43d66ba1bd3cae01970bb774e5f6"
      },
      {
        "title": "Apple: iCloud for Windows 7.20",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=50e6b35a047c9702f4cdebdf81483b05"
      },
      {
        "title": "Apple: iCloud for Windows 11.3",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=947a08401ec7e5f309d5ae26f5006f48"
      },
      {
        "title": "Apple: iOS 13.6 and iPadOS 13.6",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=a82d39d4c9a42fcf07757428b2f562b3"
      },
      {
        "title": "freedom",
        "trust": 0.1,
        "url": "https://github.com/sslab-gatech/freedom "
      },
      {
        "title": null,
        "trust": 0.1,
        "url": "https://www.theregister.co.uk/2020/07/16/apple_july_updates/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-416",
        "trust": 1.9
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211288"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211290"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211291"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211292"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211293"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211294"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211295"
      },
      {
        "trust": 1.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9895"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-9895"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu#94090210/index.html"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu95491800/index.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2020/suse-su-20202232-1"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "http://www.nsfocus.net/vulndb/50226"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2785/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht211291"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2434/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-multiple-vulnerabilities-32994"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2659/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2812/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-32847"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht211295"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211294"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211293"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht211292"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158466/apple-security-advisory-2020-07-15-5.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158688/gentoo-linux-security-advisory-202007-61.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9915"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9894"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9893"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9925"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9862"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.3,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/416.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://exchange.xforce.ibmcloud.com/vulnerabilities/185387"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/sslab-gatech/freedom"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202007-61"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9889"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9909"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9933"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9910"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9914"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9888"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14899"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9891"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9936"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9907"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9890"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-9865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11793"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/site/solutions/537113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15503"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "158688"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "158458"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "db": "PACKETSTORM",
        "id": "158688"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "158458"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-10-16T00:00:00",
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "date": "2020-10-16T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "date": "2020-08-03T12:12:00",
        "db": "PACKETSTORM",
        "id": "168885"
      },
      {
        "date": "2020-07-31T19:38:59",
        "db": "PACKETSTORM",
        "id": "158688"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-07-17T19:24:07",
        "db": "PACKETSTORM",
        "id": "158458"
      },
      {
        "date": "2020-11-04T15:24:00",
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "date": "2020-07-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      },
      {
        "date": "2020-12-11T05:02:09",
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "date": "2020-10-16T17:15:16.450000",
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-188020"
      },
      {
        "date": "2020-10-20T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-9895"
      },
      {
        "date": "2023-01-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      },
      {
        "date": "2020-12-11T05:02:09",
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      },
      {
        "date": "2024-11-21T05:41:29.190000",
        "db": "NVD",
        "id": "CVE-2020-9895"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Use-after-free vulnerability in multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-009903"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "resource management error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202007-1146"
      }
    ],
    "trust": 0.6
  }
}

VAR-202201-0304

Vulnerability from variot - Updated: 2025-12-22 23:07

A logic issue was addressed with improved state management. This issue is fixed in iOS 15.3 and iPadOS 15.3, watchOS 8.4, tvOS 15.3, Safari 15.3, macOS Monterey 12.2. Processing maliciously crafted web content may prevent Content Security Policy from being enforced. Multiple Apple products contain unspecified vulnerabilities; information may be tampered with. (CVE-2020-27918) "Clear History and Website Data" did not clear the history, so a user may be unable to fully delete browsing history. This issue is fixed in macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave. (CVE-2021-1789) A port redirection issue was found in WebKitGTK and WPE WebKit in versions prior to 2.30.6. A malicious website may be able to access restricted ports on arbitrary servers. The highest threat from this vulnerability is to data integrity. 14610.4.3.1.7 and 15610.4.3.1.7), watchOS 7.3.2, macOS Big Sur 11.2.3. A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. (CVE-2021-1870) A use-after-free vulnerability exists in the way certain events are processed for ImageLoader objects in WebKitGTK 2.30.4. A specially crafted web page can lead to a potential information leak and further memory corruption. To trigger the vulnerability, a victim must be tricked into visiting a malicious web page. (CVE-2021-21775) A use-after-free vulnerability exists in the way WebKit's GraphicsContext handles certain events in WebKitGTK 2.30.4. A specially crafted web page can lead to a potential information leak and further memory corruption. A victim must be tricked into visiting a malicious web page to trigger this vulnerability. (CVE-2021-21779) An exploitable use-after-free vulnerability exists in WebKitGTK browser version 2.30.3 x64. A specially crafted HTML web page can cause a use-after-free condition, resulting in remote code execution.
The victim needs to visit a malicious web site to trigger the vulnerability. Apple is aware of a report that this issue may have been actively exploited. Apple is aware of a report that this issue may have been actively exploited. Apple is aware of a report that this issue may have been actively exploited. A malicious application may be able to leak sensitive user information. A malicious website may be able to access restricted ports on arbitrary servers. Apple is aware of a report that this issue may have been actively exploited. Apple is aware of a report that this issue may have been actively exploited. (CVE-2021-30799) A use-after-free flaw was found in WebKitGTK. (CVE-2021-30809) A type confusion flaw was found in WebKitGTK. (CVE-2021-30818) An out-of-bounds read flaw was found in WebKitGTK. A specially crafted audio file could use this flaw to trigger a disclosure of memory when processed. (CVE-2021-30887) An information leak flaw was found in WebKitGTK. (CVE-2021-30888) A buffer overflow flaw was found in WebKitGTK. (CVE-2021-30984) ** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was in a CNA pool that was not assigned to any issues during 2021. Notes: none. (CVE-2021-32912) BubblewrapLauncher.cpp in WebKitGTK and WPE WebKit prior to 2.34.1 allows a limited sandbox bypass that allows a sandboxed process to trick host processes into thinking the sandboxed process is not confined by the sandbox, by abusing VFS syscalls that manipulate its filesystem namespace. The impact is limited to host services that create UNIX sockets that WebKit mounts inside its sandbox, and the sandboxed process remains otherwise confined. NOTE: this is similar to CVE-2021-41133. (CVE-2021-42762) A segmentation violation vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45481) A use-after-free vulnerability was found in webkitgtk.
An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45482) A use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. Video self-preview in a WebRTC call may be interrupted if the user answers a phone call. (CVE-2022-26719) In WebKitGTK up to and including 2.36.0 (and WPE WebKit), there is a heap-based buffer overflow in WebCore::TextureMapperLayer::setContentsLayer in WebCore/platform/graphics/texmap/TextureMapperLayer.cpp. (CVE-2022-32792) Multiple out-of-bounds write issues were addressed with improved bounds checking. An app may be able to disclose kernel memory. Visiting a website that frames malicious content may lead to UI spoofing. Visiting a malicious website may lead to user interface spoofing. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1. (CVE-2022-46700) A flaw was found in the WebKitGTK package. An improper input validation issue may lead to a use-after-free vulnerability. This flaw allows attackers with network access to pass specially crafted web content files, causing a denial of service or arbitrary code execution. This may, in theory, allow a remote malicious user to create a specially crafted web page, trick the victim into opening it, trigger type confusion, and execute arbitrary code on the target system. (CVE-2023-23529) A use-after-free vulnerability in WebCore::RenderLayer::addChild in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25358) A use-after-free vulnerability in WebCore::RenderLayer::renderer in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25360) A use-after-free vulnerability in WebCore::RenderLayer::setNextSibling in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely.
(CVE-2023-25361) A use-after-free vulnerability in WebCore::RenderLayer::repaintBlockSelectionGaps in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25362) A use-after-free vulnerability in WebCore::RenderLayer::updateDescendantDependentFlags in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25363) The vulnerability allows a remote malicious user to bypass Same Origin Policy restrictions. (CVE-2023-27932) The vulnerability exists due to excessive data output by the application. A remote attacker can track sensitive user information. Apple is aware of a report that this issue may have been actively exploited. (CVE-2023-32373) N/A (CVE-2023-32409).

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update
Advisory ID:       RHSA-2022:1777-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1777
Issue date:        2022-05-10
CVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823 CVE-2021-30836
                   CVE-2021-30846 CVE-2021-30848 CVE-2021-30849 CVE-2021-30851
                   CVE-2021-30884 CVE-2021-30887 CVE-2021-30888 CVE-2021-30889
                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934 CVE-2021-30936
                   CVE-2021-30951 CVE-2021-30952 CVE-2021-30953 CVE-2021-30954
                   CVE-2021-30984 CVE-2021-45481 CVE-2021-45482 CVE-2021-45483
                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592 CVE-2022-22594
                   CVE-2022-22620 CVE-2022-22637
=====================================================================

  1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64

  3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

The following packages have been upgraded to a later upstream version: webkit2gtk3 (2.34.6).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
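
As a quick pre-check before applying the erratum, the installed webkit2gtk3 version can be compared against the fixed 2.34.6 release with a plain version sort. This is a minimal sketch, not part of the advisory: the hard-coded `installed` value stands in for what `rpm -q --qf '%{VERSION}\n' webkit2gtk3` would print on a real host.

```shell
# Compare an installed webkit2gtk3 version against the 2.34.6 fixed in
# RHSA-2022:1777. "installed" is a placeholder value for illustration.
fixed="2.34.6"
installed="2.30.4"
# sort -V orders version strings numerically; the first line is the older one
older=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$older" = "$installed" ] && [ "$installed" != "$fixed" ]; then
  echo "webkit2gtk3 $installed predates $fixed: apply 'dnf update webkit2gtk3'"
else
  echo "webkit2gtk3 $installed is at or past the fixed version"
fi
```

On an affected host this prints the update hint; after the erratum is applied, the same check reports the package as current.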

  5. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: webkit2gtk3-2.34.6-1.el8.src.rpm

aarch64:
webkit2gtk3-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm

ppc64le:
webkit2gtk3-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm

s390x:
webkit2gtk3-2.34.6-1.el8.s390x.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm
webkit2gtk3-devel-2.34.6-1.el8.s390x.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm

x86_64:
webkit2gtk3-2.34.6-1.el8.i686.rpm
webkit2gtk3-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-devel-2.34.6-1.el8.i686.rpm
webkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  6. References:

https://access.redhat.com/security/cve/CVE-2021-30809
https://access.redhat.com/security/cve/CVE-2021-30818
https://access.redhat.com/security/cve/CVE-2021-30823
https://access.redhat.com/security/cve/CVE-2021-30836
https://access.redhat.com/security/cve/CVE-2021-30846
https://access.redhat.com/security/cve/CVE-2021-30848
https://access.redhat.com/security/cve/CVE-2021-30849
https://access.redhat.com/security/cve/CVE-2021-30851
https://access.redhat.com/security/cve/CVE-2021-30884
https://access.redhat.com/security/cve/CVE-2021-30887
https://access.redhat.com/security/cve/CVE-2021-30888
https://access.redhat.com/security/cve/CVE-2021-30889
https://access.redhat.com/security/cve/CVE-2021-30890
https://access.redhat.com/security/cve/CVE-2021-30897
https://access.redhat.com/security/cve/CVE-2021-30934
https://access.redhat.com/security/cve/CVE-2021-30936
https://access.redhat.com/security/cve/CVE-2021-30951
https://access.redhat.com/security/cve/CVE-2021-30952
https://access.redhat.com/security/cve/CVE-2021-30953
https://access.redhat.com/security/cve/CVE-2021-30954
https://access.redhat.com/security/cve/CVE-2021-30984
https://access.redhat.com/security/cve/CVE-2021-45481
https://access.redhat.com/security/cve/CVE-2021-45482
https://access.redhat.com/security/cve/CVE-2021-45483
https://access.redhat.com/security/cve/CVE-2022-22589
https://access.redhat.com/security/cve/CVE-2022-22590
https://access.redhat.com/security/cve/CVE-2022-22592
https://access.redhat.com/security/cve/CVE-2022-22594
https://access.redhat.com/security/cve/CVE-2022-22620
https://access.redhat.com/security/cve/CVE-2022-22637
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYnqQrdzjgjWX9erEAQi/6BAAhaqaCDj0g7uJ6LdXEng5SqGBFl5g6GIV
p/WSKyL+tI3BpKaaUWr6+d4tNnaQbKxhRTwTSJa8GMrOc7n6Y7LO8Y7mQj3bEFvn
z3HHLZK8EMgDUz4I0esuh0qNWnfsD/vJDuGbXlHLdNLlc5XzgX7YA6eIb1lxSbxV
ueSENHohbMJLbWoeI2gMUYGb5cAzBHrgdmFIsr4XUd6sr5Z1ZOPnQPf36vrXGdzj
mPXPijZtr9QiPgwijm4/DkJG7NQ4KyaPMOKauC7PEB1AHWIwHteRnVxnWuZLjpMf
RqYBQu2pYeTiyGky+ozshJ81mdfLyUQBR/+4KbB2TMFZHDlhxzNFZrErh4+dfQja
Cuf+IwTOSZgC/8XouTQMA27KFSYKd4PzwnB3yQeGU0NA/VngYp12BegeVHlDiadS
hO+mAk/BAAesdywt7ZTU1e1yROLm/jp0VcmvkQO+gh2WhErEFV3s0qnsu1dfuLY7
B1e0z6c/vp8lkSFs2fcx0Oq1S7nGIGZiR66loghp03nDoCcxblsxBcFV9CNq6yVG
BkEAFzMb/AHxqn7KbZeN6g4Los+3Dr7eFYPGUkVEXy+AbHqE+b99pT2TIjCOMh/L
wXOE+nX3KXbD5MCqvmF2K6w+MfIf3AxzzgirwXyLewSP8NKBmsdBtgwbgFam1QfM
Uqt+dghxtOQ=
=LCNn
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
.

For the oldstable distribution (buster), these problems have been fixed in version 2.34.6-1~deb10u1.

For the stable distribution (bullseye), these problems have been fixed in version 2.34.6-1~deb11u1.

We recommend that you upgrade your webkit2gtk packages.

Alternatively, on your watch, select "My Watch > General > About".

Installation note:

Apple TV will periodically check for software updates.

AMD Kernel Available for: macOS Monterey Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22584: Mickey Jin (@patch1t) of Trend Micro

Crash Reporter Available for: macOS Monterey Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22578: an anonymous researcher

iCloud Available for: macOS Monterey Impact: An application may be able to access a user's files Description: An issue existed within the path validation logic for symlinks. CVE-2022-22591: Antonio Zekic (@antoniozekic) of Diverto

IOMobileFrameBuffer Available for: macOS Monterey Impact: A malicious application may be able to execute arbitrary code with kernel privileges. CVE-2022-22587: an anonymous researcher, Meysam Firouzi (@R00tkitSMM) of MBition - Mercedes-Benz Innovation Lab, Siddharth Aeri (@b1n4r1b01)

Kernel Available for: macOS Monterey Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A buffer overflow issue was addressed with improved memory handling. CVE-2022-22579: Mickey Jin (@patch1t) of Trend Micro

PackageKit Available for: macOS Monterey Impact: An application may be able to access restricted files Description: A permissions issue was addressed with improved validation. CVE-2022-22583: an anonymous researcher, Mickey Jin (@patch1t), Ron Hass (@ronhass7) of Perception Point

WebKit Available for: macOS Monterey Impact: Processing a maliciously crafted mail message may lead to running arbitrary javascript Description: A validation issue was addressed with improved input sanitization. CVE-2022-22592: Prakash (@1lastBr3ath)

WebKit Storage Available for: macOS Monterey Impact: A website may be able to track sensitive user information Description: A cross-origin issue in the IndexDB API was addressed with improved input validation. CVE-2022-22594: Martin Bajanik of FingerprintJS

Additional recognition

Kernel We would like to acknowledge Tao Huang for their assistance.

Metal We would like to acknowledge Tao Huang for their assistance.

PackageKit We would like to acknowledge Mickey Jin (@patch1t), Mickey Jin (@patch1t) of Trend Micro for their assistance.

WebKit We would like to acknowledge Prakash (@1lastBr3ath) for their assistance.

Installation note:

This update may be obtained from the Mac App Store

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmHx0zEACgkQeC9qKD1p
rhhFUw/+K0ImMkw8zCjqdJza05Y6mlSa2wAdoVQK4pywYylcWMzemOi9GStr1Tgq
CmA7KFo2hPp/9kh6+SURi91WwUKdNHsDfasjNDqTTOJalQFuKB0erZgEpcprBnHE
lzT2heSJDl58sMSn0hLlGIhLfc+4Ld29FKc3lmtBPGeKY3vUViHLN0s3sZj07twx
Ew7yYkDBkz/e2kGRDByWzmvsSQt+w7HMK+pN1m5CBTZdP8KAHVtbuv8BPtMHKwNJ
1Kzo6nW4MJ9Eds63Lz4A37nqTNxmvsbf4zDSppwAp8NalEHqg5My7PzmK97eh6ap
jS4P4LqdRigTvRMq3eDVh/4Lie+/39nXwdQI6czETvTYzi+iA6k3q1Lsf2eIYzCf
0y4YTKEwIze05Q45YqbbnRDfVGOKtfZOcFVYsMxYBHBMp6LDcLJ9i0+AORX2igoA
dODLICbrHzexa682FDE2RGtgQOtS5k4LJLUggvSeOW/tXN+MovfVTjCIzJSKYltP
eQm8gq3EajaRk4JQcYkxklalyOHVZpkg3+u6Az+xIY5nVVijkuGDqqiCoh62zmsE
kSXZrfuJTIYWbIzpR23xVMyxWBcN4AXDE3xLeZajm0yGnAxpd8CGwb4FhgipcDVE
wwazu76IYBPpP40AyBALO10aXhYjrI+Bj6zI+Ug3msXLhkeICtI=
=WEmw
-----END PGP SIGNATURE-----

.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202208-39


                                       https://security.gentoo.org/

Severity: High
Title: WebKitGTK+: Multiple Vulnerabilities
Date: August 31, 2022
Bugs: #866494, #864427, #856445, #861740, #837305, #845252, #839984, #833568, #832990
ID: 202208-39


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which could result in the arbitrary execution of code.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk          < 2.36.7                    >= 2.36.7

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebKitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.36.7"
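The GLSA's version gate ("< 2.36.7 vulnerable, >= 2.36.7 unaffected") comes down to an ordered comparison of version strings. The sketch below illustrates that check using GNU coreutils' version sort; the `is_older` helper and the example version are illustrative only, not part of Gentoo's tooling.

```shell
# Minimal sketch, assuming GNU coreutils (sort -V). The is_older helper is
# hypothetical: it returns success when $1 sorts strictly before $2 under
# version ordering, i.e. when $1 is still in the vulnerable range.
is_older() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

installed="2.36.4"   # example value; substitute the actually installed version
if is_older "$installed" "2.36.7"; then
    echo "net-libs/webkit-gtk $installed is in the vulnerable range"
else
    echo "net-libs/webkit-gtk $installed is not affected by this GLSA"
fi
```

On a real system the installed version would come from the package manager rather than a hard-coded string; the helper only captures the ordering rule the advisory's "Vulnerable / Unaffected" columns rely on.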

References

[ 1 ] CVE-2022-2294 https://nvd.nist.gov/vuln/detail/CVE-2022-2294
[ 2 ] CVE-2022-22589 https://nvd.nist.gov/vuln/detail/CVE-2022-22589
[ 3 ] CVE-2022-22590 https://nvd.nist.gov/vuln/detail/CVE-2022-22590
[ 4 ] CVE-2022-22592 https://nvd.nist.gov/vuln/detail/CVE-2022-22592
[ 5 ] CVE-2022-22620 https://nvd.nist.gov/vuln/detail/CVE-2022-22620
[ 6 ] CVE-2022-22624 https://nvd.nist.gov/vuln/detail/CVE-2022-22624
[ 7 ] CVE-2022-22628 https://nvd.nist.gov/vuln/detail/CVE-2022-22628
[ 8 ] CVE-2022-22629 https://nvd.nist.gov/vuln/detail/CVE-2022-22629
[ 9 ] CVE-2022-22662 https://nvd.nist.gov/vuln/detail/CVE-2022-22662
[ 10 ] CVE-2022-22677 https://nvd.nist.gov/vuln/detail/CVE-2022-22677
[ 11 ] CVE-2022-26700 https://nvd.nist.gov/vuln/detail/CVE-2022-26700
[ 12 ] CVE-2022-26709 https://nvd.nist.gov/vuln/detail/CVE-2022-26709
[ 13 ] CVE-2022-26710 https://nvd.nist.gov/vuln/detail/CVE-2022-26710
[ 14 ] CVE-2022-26716 https://nvd.nist.gov/vuln/detail/CVE-2022-26716
[ 15 ] CVE-2022-26717 https://nvd.nist.gov/vuln/detail/CVE-2022-26717
[ 16 ] CVE-2022-26719 https://nvd.nist.gov/vuln/detail/CVE-2022-26719
[ 17 ] CVE-2022-30293 https://nvd.nist.gov/vuln/detail/CVE-2022-30293
[ 18 ] CVE-2022-30294 https://nvd.nist.gov/vuln/detail/CVE-2022-30294
[ 19 ] CVE-2022-32784 https://nvd.nist.gov/vuln/detail/CVE-2022-32784
[ 20 ] CVE-2022-32792 https://nvd.nist.gov/vuln/detail/CVE-2022-32792
[ 21 ] CVE-2022-32893 https://nvd.nist.gov/vuln/detail/CVE-2022-32893
[ 22 ] WSA-2022-0002 https://webkitgtk.org/security/WSA-2022-0002.html
[ 23 ] WSA-2022-0003 https://webkitgtk.org/security/WSA-2022-0003.html
[ 24 ] WSA-2022-0007 https://webkitgtk.org/security/WSA-2022-0007.html
[ 25 ] WSA-2022-0008 https://webkitgtk.org/security/WSA-2022-0008.html

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202208-39

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0304",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "8.4"
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0.0"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.2"
      },
      {
        "model": "iphone",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "tvos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "safari",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "watchos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": "8.4"
      },
      {
        "model": "ios",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      }
    ],
    "trust": 0.3
  },
  "cve": "CVE-2022-22592",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2022-22592",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "VHN-411220",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:N/I:P/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-22592",
            "impactScore": 3.6,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:H/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.5,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2022-22592",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:H/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-22592",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-22592",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202201-2421",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-411220",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-22592",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411220"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A logic issue was addressed with improved state management. This issue is fixed in iOS 15.3 and iPadOS 15.3, watchOS 8.4, tvOS 15.3, Safari 15.3, macOS Monterey 12.2. Processing maliciously crafted web content may prevent Content Security Policy from being enforced. plural Apple There are unspecified vulnerabilities in the product.Information may be tampered with. (CVE-2020-27918)\n\"Clear History and Website Data\" did not clear the history. A user may be unable to fully delete browsing history. This issue is fixed in macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave. (CVE-2021-1789)\nA port redirection issue was found in WebKitGTK and WPE WebKit in versions prior to 2.30.6. A malicious website may be able to access restricted ports on arbitrary servers. The highest threat from this vulnerability is to data integrity. 14610.4.3.1.7 and 15610.4.3.1.7), watchOS 7.3.2, macOS Big Sur 11.2.3. A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-1870)\nA use-after-free vulnerability exists in the way certain events are processed for ImageLoader objects of Webkit WebKitGTK 2.30.4. A specially crafted web page can lead to a potential information leak and further memory corruption. In order to trigger the vulnerability, a victim must be tricked into visiting a malicious webpage. (CVE-2021-21775)\nA use-after-free vulnerability exists in the way Webkit\u0027s GraphicsContext handles certain events in WebKitGTK 2.30.4. A specially crafted web page can lead to a potential information leak and further memory corruption. A victim must be tricked into visiting a malicious web page to trigger this vulnerability. (CVE-2021-21779)\nAn exploitable use-after-free vulnerability exists in WebKitGTK browser version 2.30.3 x64. A specially crafted HTML web page can cause a use-after-free condition, resulting in remote code execution. 
The victim needs to visit a malicious web site to trigger the vulnerability. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. A malicious application may be able to leak sensitive user information. A malicious website may be able to access restricted ports on arbitrary servers. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-30799)\nA use-after-free flaw was found in WebKitGTK. (CVE-2021-30809)\nA confusion type flaw was found in WebKitGTK. (CVE-2021-30818)\nAn out-of-bounds read flaw was found in WebKitGTK. A specially crafted audio file could use this flaw to trigger a disclosure of memory when processed. (CVE-2021-30887)\nAn information leak flaw was found in WebKitGTK. (CVE-2021-30888)\nA buffer overflow flaw was found in WebKitGTK. (CVE-2021-30984)\n** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was in a CNA pool that was not assigned to any issues during 2021. Notes: none. (CVE-2021-32912)\nBubblewrapLauncher.cpp in WebKitGTK and WPE WebKit prior to 2.34.1 allows a limited sandbox bypass that allows a sandboxed process to trick host processes into thinking the sandboxed process is not confined by the sandbox, by abusing VFS syscalls that manipulate its filesystem namespace. The impact is limited to host services that create UNIX sockets that WebKit mounts inside its sandbox, and the sandboxed process remains otherwise confined. NOTE: this is similar to CVE-2021-41133. (CVE-2021-42762)\nA segmentation violation vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. 
(CVE-2021-45481)\nA use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45482)\nA use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. Video self-preview in a webRTC call may be interrupted if the user answers a phone call. (CVE-2022-26719)\nIn WebKitGTK up to and including 2.36.0 (and WPE WebKit), there is a heap-based buffer overflow in WebCore::TextureMapperLayer::setContentsLayer in WebCore/platform/graphics/texmap/TextureMapperLayer.cpp. (CVE-2022-32792)\nMultiple out-of-bounds write issues were addressed with improved bounds checking. An app may be able to disclose kernel memory. Visiting a website that frames malicious content may lead to UI spoofing. Visiting a malicious website may lead to user interface spoofing. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1.. (CVE-2022-46700)\nA flaw was found in the WebKitGTK package. An improper input validation issue may lead to a use-after-free vulnerability. This flaw allows attackers with network access to pass specially crafted web content files, causing a denial of service or arbitrary code execution. This may, in theory, allow a remote malicious user to create a specially crafted web page, trick the victim into opening it, trigger type confusion, and execute arbitrary code on the target system. (CVE-2023-23529)\nA use-after-free vulnerability in WebCore::RenderLayer::addChild in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25358)\nA use-after-free vulnerability in WebCore::RenderLayer::renderer in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. 
(CVE-2023-25360)\nA use-after-free vulnerability in WebCore::RenderLayer::setNextSibling in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25361)\nA use-after-free vulnerability in WebCore::RenderLayer::repaintBlockSelectionGaps in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25362)\nA use-after-free vulnerability in WebCore::RenderLayer::updateDescendantDependentFlags in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25363)\nThe vulnerability allows a remote malicious user to bypass Same Origin Policy restrictions. (CVE-2023-27932)\nThe vulnerability exists due to excessive data output by the application. A remote attacker can track sensitive user information. Apple is aware of a report that this issue may have been actively exploited. (CVE-2023-32373)\nN/A (CVE-2023-32409). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2022:1777-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:1777\nIssue date:        2022-05-10\nCVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823 \n                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848 \n                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884 \n                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889 \n                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934 \n                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952 \n                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984 \n                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483 \n                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592 \n                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637 
\n=====================================================================\n\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nThe following packages have been upgraded to a later upstream version:\nwebkit2gtk3 (2.34.6). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nwebkit2gtk3-2.34.6-1.el8.src.rpm\n\naarch64:\nwebkit2gtk3-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_6
4.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-30809\nhttps://access.redhat.com/security/cve/CVE-2021-30818\nhttps://access.redhat.com/security/cve/CVE-2021-30823\nhttps://access.redhat.com/security/cve/CVE-2021-30836\nhttps://access.redhat.com/security/cve/CVE-2021-30846\nhttps://access.redhat.com/security/cve/CVE-2021-30848\nhttps://access.redhat.com/security/cve/CVE-2021-30849\nhttps://access.redhat.com/security/cve/CVE-2021-30851\nhttps://access.redhat.com/security/cve/CVE-2021-30884\nhttps://access.redhat.com/security/cve/CVE-2021-30887\nhttps://access.redhat.com/security/cve/CVE-2021-30888\nhttps://access.redhat.com/security/cve/CVE-2021-30889\nhttps://access.redhat.com/security/cve/CVE-2021-30890\nhttps://access.redhat.com/security/cve/CVE-2021-30897\nhttps://access.redhat.com/security/cve/CVE-2021-30934\nhttps://access.redhat.com/security/cve/CVE-2021-30936\nhttps://access.redhat.com/security/cve/CVE-2021-30951\nhttps://access.redhat.com/security/cve/CVE-2021-30952\nhttps://access.redhat.com/security/cve/CVE-2021-30953\nhttps://access.redhat.com/security/cve/CVE-2021-30954\nhttps://access.redhat.com/security/cve/CVE-2021-30984\nhttps://access.redhat.com/security/cve/CVE-2021-45481\nhttps://access.redhat.com/security/cve/CVE-2021-45482\nhttps://access.redhat.com/security/cve/CVE-2021-45483\nhttps://access.redhat.com/security/cve/CVE-2022-22589\nhttps://access.redhat.com/security/cve/CVE-2022-22590\nhttps://access.redhat.com/security/cve/CVE-2022-22592\nhttps://access.redhat.com/security/cve/CVE-2022-22594\nhttps://access.redhat.com/security/cve/CVE-2022-22620
\nhttps://access.redhat.com/security/cve/CVE-2022-22637\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnqQrdzjgjWX9erEAQi/6BAAhaqaCDj0g7uJ6LdXEng5SqGBFl5g6GIV\np/WSKyL+tI3BpKaaUWr6+d4tNnaQbKxhRTwTSJa8GMrOc7n6Y7LO8Y7mQj3bEFvn\nz3HHLZK8EMgDUz4I0esuh0qNWnfsD/vJDuGbXlHLdNLlc5XzgX7YA6eIb1lxSbxV\nueSENHohbMJLbWoeI2gMUYGb5cAzBHrgdmFIsr4XUd6sr5Z1ZOPnQPf36vrXGdzj\nmPXPijZtr9QiPgwijm4/DkJG7NQ4KyaPMOKauC7PEB1AHWIwHteRnVxnWuZLjpMf\nRqYBQu2pYeTiyGky+ozshJ81mdfLyUQBR/+4KbB2TMFZHDlhxzNFZrErh4+dfQja\nCuf+IwTOSZgC/8XouTQMA27KFSYKd4PzwnB3yQeGU0NA/VngYp12BegeVHlDiadS\nhO+mAk/BAAesdywt7ZTU1e1yROLm/jp0VcmvkQO+gh2WhErEFV3s0qnsu1dfuLY7\nB1e0z6c/vp8lkSFs2fcx0Oq1S7nGIGZiR66loghp03nDoCcxblsxBcFV9CNq6yVG\nBkEAFzMb/AHxqn7KbZeN6g4Los+3Dr7eFYPGUkVEXy+AbHqE+b99pT2TIjCOMh/L\nwXOE+nX3KXbD5MCqvmF2K6w+MfIf3AxzzgirwXyLewSP8NKBmsdBtgwbgFam1QfM\nUqt+dghxtOQ=\n=LCNn\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nFor the oldstable distribution (buster), these problems have been fixed\nin version 2.34.6-1~deb10u1. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.34.6-1~deb11u1. \n\nWe recommend that you upgrade your webkit2gtk packages. \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\". \n\nInstallation note:\n\nApple TV will periodically check for software updates. 
\n\nAMD Kernel\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-22584: Mickey Jin (@patch1t) of Trend Micro\n\nCrash Reporter\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2022-22578: an anonymous researcher\n\niCloud\nAvailable for: macOS Monterey\nImpact: An application may be able to access a user\u0027s files\nDescription: An issue existed within the path validation logic for\nsymlinks. \nCVE-2022-22591: Antonio Zekic (@antoniozekic) of Diverto\n\nIOMobileFrameBuffer\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges. \nCVE-2022-22587: an anonymous researcher, Meysam Firouzi (@R00tkitSMM)\nof MBition - Mercedes-Benz Innovation Lab, Siddharth Aeri\n(@b1n4r1b01)\n\nKernel\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nCVE-2022-22579: Mickey Jin (@patch1t) of Trend Micro\n\nPackageKit\nAvailable for: macOS Monterey\nImpact: An application may be able to access restricted files\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2022-22583: an anonymous researcher, Mickey Jin (@patch1t), Ron\nHass (@ronhass7) of Perception Point\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted mail message may lead to\nrunning arbitrary javascript\nDescription: A validation issue was addressed with improved input\nsanitization. 
\nCVE-2022-22592: Prakash (@1lastBr3ath)\n\nWebKit Storage\nAvailable for: macOS Monterey\nImpact: A website may be able to track sensitive user information\nDescription: A cross-origin issue in the IndexDB API was addressed\nwith improved input validation. \nCVE-2022-22594: Martin Bajanik of FingerprintJS\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Tao Huang for their assistance. \n\nMetal\nWe would like to acknowledge Tao Huang for their assistance. \n\nPackageKit\nWe would like to acknowledge Mickey Jin (@patch1t), Mickey Jin\n(@patch1t) of Trend Micro for their assistance. \n\nWebKit\nWe would like to acknowledge Prakash (@1lastBr3ath) for their\nassistance. \n\nInstallation note:\n\nThis update may be obtained from the Mac App Store\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmHx0zEACgkQeC9qKD1p\nrhhFUw/+K0ImMkw8zCjqdJza05Y6mlSa2wAdoVQK4pywYylcWMzemOi9GStr1Tgq\nCmA7KFo2hPp/9kh6+SURi91WwUKdNHsDfasjNDqTTOJalQFuKB0erZgEpcprBnHE\nlzT2heSJDl58sMSn0hLlGIhLfc+4Ld29FKc3lmtBPGeKY3vUViHLN0s3sZj07twx\nEw7yYkDBkz/e2kGRDByWzmvsSQt+w7HMK+pN1m5CBTZdP8KAHVtbuv8BPtMHKwNJ\n1Kzo6nW4MJ9Eds63Lz4A37nqTNxmvsbf4zDSppwAp8NalEHqg5My7PzmK97eh6ap\njS4P4LqdRigTvRMq3eDVh/4Lie+/39nXwdQI6czETvTYzi+iA6k3q1Lsf2eIYzCf\n0y4YTKEwIze05Q45YqbbnRDfVGOKtfZOcFVYsMxYBHBMp6LDcLJ9i0+AORX2igoA\ndODLICbrHzexa682FDE2RGtgQOtS5k4LJLUggvSeOW/tXN+MovfVTjCIzJSKYltP\neQm8gq3EajaRk4JQcYkxklalyOHVZpkg3+u6Az+xIY5nVVijkuGDqqiCoh62zmsE\nkSXZrfuJTIYWbIzpR23xVMyxWBcN4AXDE3xLeZajm0yGnAxpd8CGwb4FhgipcDVE\nwwazu76IYBPpP40AyBALO10aXhYjrI+Bj6zI+Ug3msXLhkeICtI=\n=WEmw\n-----END PGP SIGNATURE-----\n\n\n. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202208-39\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebKitGTK+: Multiple Vulnerabilities\n     Date: August 31, 2022\n     Bugs: #866494, #864427, #856445, #861740, #837305, #845252, #839984, #833568, #832990\n       ID: 202208-39\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in the arbitrary execution of code. \n\nAffected packages\n================\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk        \u003c 2.36.7                    \u003e= 2.36.7\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll WebKitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.36.7\"\n\nReferences\n=========\n[ 1 ] CVE-2022-2294\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2294\n[ 2 ] CVE-2022-22589\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22589\n[ 3 ] CVE-2022-22590\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22590\n[ 4 ] CVE-2022-22592\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22592\n[ 5 ] CVE-2022-22620\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22620\n[ 6 ] CVE-2022-22624\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22624\n[ 7 ] CVE-2022-22628\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22628\n[ 8 ] CVE-2022-22629\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22629\n[ 9 ] CVE-2022-22662\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22662\n[ 10 ] CVE-2022-22677\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22677\n[ 11 ] CVE-2022-26700\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26700\n[ 12 ] CVE-2022-26709\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26709\n[ 13 ] CVE-2022-26710\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26710\n[ 14 ] CVE-2022-26716\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26716\n[ 15 ] CVE-2022-26717\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26717\n[ 16 ] CVE-2022-26719\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26719\n[ 17 ] CVE-2022-30293\n      https://nvd.nist.gov/vuln/detail/CVE-2022-30293\n[ 18 ] CVE-2022-30294\n      https://nvd.nist.gov/vuln/detail/CVE-2022-30294\n[ 19 ] CVE-2022-32784\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32784\n[ 20 ] CVE-2022-32792\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32792\n[ 21 ] CVE-2022-32893\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32893\n[ 22 ] WSA-2022-0002\n      https://webkitgtk.org/security/WSA-2022-0002.html\n[ 23 ] WSA-2022-0003\n      https://webkitgtk.org/security/WSA-2022-0003.html\n[ 24 
] WSA-2022-0007\n      https://webkitgtk.org/security/WSA-2022-0007.html\n[ 25 ] WSA-2022-0008\n      https://webkitgtk.org/security/WSA-2022-0008.html\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202208-39\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-22592"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "db": "VULHUB",
        "id": "VHN-411220"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169237"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      }
    ],
    "trust": 2.43
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-411220",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411220"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-22592",
        "trust": 4.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168226",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "165777",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2022022120",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022012637",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022020932",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022051140",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0844",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0409",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0724",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0899",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165772",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165775",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165776",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165771",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-411220",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22592",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167037",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169237",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169229",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411220"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169237"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "id": "VAR-202201-0304",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411220"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:07:07.954000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT213059",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT213053"
      },
      {
        "title": "Apple Safari Fixing measures for security feature vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=182130"
      },
      {
        "title": "Debian Security Advisories: DSA-5083-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=1e1726cb3c6d9dabbfb6d6be4668f64f"
      },
      {
        "title": "Debian Security Advisories: DSA-5084-1 wpewebkit -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=fad7bdb7356c54203c2fb7db9019fb4f"
      },
      {
        "title": "Apple: iOS 15.3 and iPadOS 15.3",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=05e71c916b30e0c013cc3ece80cc9189"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2023-2088",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2088"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/Live-Hack-CVE/CVE-2022-22592 "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      },
      {
        "problemtype": "Lack of information (CWE-noinfo) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://support.apple.com/en-us/ht213058"
      },
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202208-39"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213053"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213054"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213057"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213059"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22592"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-22592"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22590"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022022120"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168226/gentoo-linux-security-advisory-202208-39.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022020932"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-22592/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-macos-multiple-vulnerabilities-37394"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165777/apple-security-advisory-2022-01-26-7.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022012637"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022051140"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0409"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-three-vulnerabilities-37548"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0724"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0844"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0899"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22620"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22584"
      },
      {
        "trust": 0.3,
        "url": "https://xlab.tencent.com)"
      },
      {
        "trust": 0.3,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22594"
      },
      {
        "trust": 0.3,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22593"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22585"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22578"
      },
      {
        "trust": 0.2,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.2,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22579"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/live-hack-cve/cve-2022-22592"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/2022/dsa-5083"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2023-2088.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22637"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1777"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22620"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/wpewebkit"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213059."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213057."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22586"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213054."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22591"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22587"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22677"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2294"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30293"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0008.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30294"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32893"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32792"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0003.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32784"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0007.html"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411220"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169237"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-411220"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169237"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-03-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-411220"
      },
      {
        "date": "2022-03-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "date": "2022-05-11T15:50:41",
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "date": "2022-02-28T20:12:00",
        "db": "PACKETSTORM",
        "id": "169237"
      },
      {
        "date": "2022-02-28T20:12:00",
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "date": "2022-01-31T15:47:07",
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "date": "2022-01-31T15:46:53",
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "date": "2022-01-31T15:46:05",
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "date": "2022-09-01T16:33:44",
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "date": "2022-01-26T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      },
      {
        "date": "2023-08-02T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "date": "2022-03-18T18:15:12.760000",
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-411220"
      },
      {
        "date": "2022-09-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-22592"
      },
      {
        "date": "2022-09-02T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      },
      {
        "date": "2023-08-02T06:50:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      },
      {
        "date": "2024-11-21T06:47:05.197000",
        "db": "NVD",
        "id": "CVE-2022-22592"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "plural \u00a0Apple\u00a0 Product vulnerabilities",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-009000"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "security feature problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2421"
      }
    ],
    "trust": 0.6
  }
}

VAR-202201-0567

Vulnerability from variot - Updated: 2025-12-22 23:04

A use after free issue was addressed with improved memory management. This issue is fixed in iOS 15.3 and iPadOS 15.3, watchOS 8.4, tvOS 15.3, Safari 15.3, macOS Monterey 12.2. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-27918) "Clear History and Website Data" did not clear the history. A user may be unable to fully delete browsing history. This issue is fixed in macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave. (CVE-2021-1789) A port redirection issue was found in WebKitGTK and WPE WebKit in versions prior to 2.30.6. A malicious website may be able to access restricted ports on arbitrary servers. The highest threat from this vulnerability is to data integrity. 14610.4.3.1.7 and 15610.4.3.1.7), watchOS 7.3.2, macOS Big Sur 11.2.3. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-1870) A use-after-free vulnerability exists in the way certain events are processed for ImageLoader objects of Webkit WebKitGTK 2.30.4. In order to trigger the vulnerability, a victim must be tricked into visiting a malicious webpage. (CVE-2021-21775) A use-after-free vulnerability exists in the way Webkit's GraphicsContext handles certain events in WebKitGTK 2.30.4. A victim must be tricked into visiting a malicious web page to trigger this vulnerability. (CVE-2021-21779) An exploitable use-after-free vulnerability exists in WebKitGTK browser version 2.30.3 x64. The victim needs to visit a malicious web site to trigger the vulnerability. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-30661) An integer overflow was addressed with improved input validation. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. A malicious application may be able to leak sensitive user information. 
A malicious website may be able to access restricted ports on arbitrary servers. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-30799) A use-after-free flaw was found in WebKitGTK. (CVE-2021-30809) A confusion type flaw was found in WebKitGTK. (CVE-2021-30818) An out-of-bounds read flaw was found in WebKitGTK. A specially crafted audio file could use this flaw to trigger a disclosure of memory when processed. (CVE-2021-30887) An information leak flaw was found in WebKitGTK. A malicious web site using Content Security Policy reports could use this flaw to leak information via redirects. (CVE-2021-30888) A buffer overflow flaw was found in WebKitGTK. (CVE-2021-30951) An integer overflow was addressed with improved input validation. (CVE-2021-30952) An out-of-bounds read was addressed with improved bounds checking. (CVE-2021-30984) ** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was in a CNA pool that was not assigned to any issues during 2021. Notes: none. (CVE-2021-32912) BubblewrapLauncher.cpp in WebKitGTK and WPE WebKit prior to 2.34.1 allows a limited sandbox bypass that allows a sandboxed process to trick host processes into thinking the sandboxed process is not confined by the sandbox, by abusing VFS syscalls that manipulate its filesystem namespace. The impact is limited to host services that create UNIX sockets that WebKit mounts inside its sandbox, and the sandboxed process remains otherwise confined. NOTE: this is similar to CVE-2021-41133. (CVE-2021-42762) A segmentation violation vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45481) A use-after-free vulnerability was found in webkitgtk. 
An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45482) A use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. Video self-preview in a webRTC call may be interrupted if the user answers a phone call. (CVE-2022-26719) In WebKitGTK up to and including 2.36.0 (and WPE WebKit), there is a heap-based buffer overflow in WebCore::TextureMapperLayer::setContentsLayer in WebCore/platform/graphics/texmap/TextureMapperLayer.cpp. An app may be able to disclose kernel memory. Visiting a website that frames malicious content may lead to UI spoofing. Visiting a malicious website may lead to user interface spoofing. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1.. (CVE-2022-46700) A flaw was found in the WebKitGTK package. An improper input validation issue may lead to a use-after-free vulnerability. This may, in theory, allow a remote malicious user to create a specially crafted web page, trick the victim into opening it, trigger type confusion, and execute arbitrary code on the target system. (CVE-2023-23529) A use-after-free vulnerability in WebCore::RenderLayer::addChild in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25358) A use-after-free vulnerability in WebCore::RenderLayer::renderer in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25360) A use-after-free vulnerability in WebCore::RenderLayer::setNextSibling in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25361) A use-after-free vulnerability in WebCore::RenderLayer::repaintBlockSelectionGaps in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. 
(CVE-2023-25362) A use-after-free vulnerability in WebCore::RenderLayer::updateDescendantDependentFlags in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25363) The vulnerability allows a remote malicious user to bypass Same Origin Policy restrictions. (CVE-2023-27932) The vulnerability exists due to excessive data output by the application. A remote attacker can track sensitive user information. (CVE-2023-27954) An out-of-bounds read issue in WebKit that could be abused to disclose sensitive information when processing web content. Apple is aware of a report that this issue may have been actively exploited. (CVE-2023-32373) N/A (CVE-2023-32409).

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update
Advisory ID:       RHSA-2022:1777-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1777
Issue date:        2022-05-10
CVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823
                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848
                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884
                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889
                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934
                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952
                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984
                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483
                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592
                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637
=====================================================================

  1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64

  3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

The following packages have been upgraded to a later upstream version: webkit2gtk3 (2.34.6).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  5. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: webkit2gtk3-2.34.6-1.el8.src.rpm

aarch64: webkit2gtk3-2.34.6-1.el8.aarch64.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm webkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm

ppc64le: webkit2gtk3-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm

s390x: webkit2gtk3-2.34.6-1.el8.s390x.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm webkit2gtk3-devel-2.34.6-1.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm

x86_64: webkit2gtk3-2.34.6-1.el8.i686.rpm webkit2gtk3-2.34.6-1.el8.x86_64.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm webkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm webkit2gtk3-devel-2.34.6-1.el8.i686.rpm webkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  6. References:

https://access.redhat.com/security/cve/CVE-2021-30809
https://access.redhat.com/security/cve/CVE-2021-30818
https://access.redhat.com/security/cve/CVE-2021-30823
https://access.redhat.com/security/cve/CVE-2021-30836
https://access.redhat.com/security/cve/CVE-2021-30846
https://access.redhat.com/security/cve/CVE-2021-30848
https://access.redhat.com/security/cve/CVE-2021-30849
https://access.redhat.com/security/cve/CVE-2021-30851
https://access.redhat.com/security/cve/CVE-2021-30884
https://access.redhat.com/security/cve/CVE-2021-30887
https://access.redhat.com/security/cve/CVE-2021-30888
https://access.redhat.com/security/cve/CVE-2021-30889
https://access.redhat.com/security/cve/CVE-2021-30890
https://access.redhat.com/security/cve/CVE-2021-30897
https://access.redhat.com/security/cve/CVE-2021-30934
https://access.redhat.com/security/cve/CVE-2021-30936
https://access.redhat.com/security/cve/CVE-2021-30951
https://access.redhat.com/security/cve/CVE-2021-30952
https://access.redhat.com/security/cve/CVE-2021-30953
https://access.redhat.com/security/cve/CVE-2021-30954
https://access.redhat.com/security/cve/CVE-2021-30984
https://access.redhat.com/security/cve/CVE-2021-45481
https://access.redhat.com/security/cve/CVE-2021-45482
https://access.redhat.com/security/cve/CVE-2021-45483
https://access.redhat.com/security/cve/CVE-2022-22589
https://access.redhat.com/security/cve/CVE-2022-22590
https://access.redhat.com/security/cve/CVE-2022-22592
https://access.redhat.com/security/cve/CVE-2022-22594
https://access.redhat.com/security/cve/CVE-2022-22620
https://access.redhat.com/security/cve/CVE-2022-22637
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/

  7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYnqQrdzjgjWX9erEAQi/6BAAhaqaCDj0g7uJ6LdXEng5SqGBFl5g6GIV p/WSKyL+tI3BpKaaUWr6+d4tNnaQbKxhRTwTSJa8GMrOc7n6Y7LO8Y7mQj3bEFvn z3HHLZK8EMgDUz4I0esuh0qNWnfsD/vJDuGbXlHLdNLlc5XzgX7YA6eIb1lxSbxV ueSENHohbMJLbWoeI2gMUYGb5cAzBHrgdmFIsr4XUd6sr5Z1ZOPnQPf36vrXGdzj mPXPijZtr9QiPgwijm4/DkJG7NQ4KyaPMOKauC7PEB1AHWIwHteRnVxnWuZLjpMf RqYBQu2pYeTiyGky+ozshJ81mdfLyUQBR/+4KbB2TMFZHDlhxzNFZrErh4+dfQja Cuf+IwTOSZgC/8XouTQMA27KFSYKd4PzwnB3yQeGU0NA/VngYp12BegeVHlDiadS hO+mAk/BAAesdywt7ZTU1e1yROLm/jp0VcmvkQO+gh2WhErEFV3s0qnsu1dfuLY7 B1e0z6c/vp8lkSFs2fcx0Oq1S7nGIGZiR66loghp03nDoCcxblsxBcFV9CNq6yVG BkEAFzMb/AHxqn7KbZeN6g4Los+3Dr7eFYPGUkVEXy+AbHqE+b99pT2TIjCOMh/L wXOE+nX3KXbD5MCqvmF2K6w+MfIf3AxzzgirwXyLewSP8NKBmsdBtgwbgFam1QfM Uqt+dghxtOQ= =LCNn -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .

For the oldstable distribution (buster), these problems have been fixed in version 2.34.6-1~deb10u1.

For the stable distribution (bullseye), these problems have been fixed in version 2.34.6-1~deb11u1.

We recommend that you upgrade your webkit2gtk packages.
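The Debian guidance above comes down to a version comparison against the fixed package version. As a rough illustration (not part of the advisory), GNU `sort -V` orders these Debian-style version strings, including the `~` suffix, the way dpkg does for the values shown; the `installed` value below is a hypothetical placeholder:

```shell
# Illustrative sketch only: compare an installed webkit2gtk version against
# the fixed bullseye version using GNU sort -V version ordering.
fixed="2.34.6-1~deb11u1"
installed="2.34.4-1~deb11u1"   # hypothetical value; on a real system query it,
                               # e.g. dpkg-query -W -f='${Version}' libwebkit2gtk-4.0-37
# sort -V sorts versions ascending; head -n1 picks the smaller of the two.
lowest=$(printf '%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "webkit2gtk $installed is older than the fixed $fixed -- upgrade"
else
    echo "webkit2gtk $installed is at or above the fixed $fixed"
fi
```

For a production check, dpkg's own `dpkg --compare-versions` is the authoritative comparator; the sketch above is only meant to show why the fixed version string matters.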

Alternatively, on your watch, select "My Watch > General > About". CVE-2022-22591: Antonio Zekic (@antoniozekic) of Diverto

IOMobileFrameBuffer Available for: macOS Monterey Impact: A malicious application may be able to execute arbitrary code with kernel privileges.

PackageKit We would like to acknowledge Mickey Jin (@patch1t), Mickey Jin (@patch1t) of Trend Micro for their assistance.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2022-01-26-1 iOS 15.3 and iPadOS 15.3

iOS 15.3 and iPadOS 15.3 addresses the following issues.

ColorSync Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved validation. CVE-2022-22584: Mickey Jin (@patch1t) of Trend Micro

Crash Reporter Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22578: an anonymous researcher

iCloud Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to access a user's files Description: An issue existed within the path validation logic for symlinks. CVE-2022-22585: Zhipeng Huo (@R3dF09) of Tencent Security Xuanwu Lab (https://xlab.tencent.com)

IOMobileFrameBuffer Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with kernel privileges. CVE-2022-22587: an anonymous researcher, Meysam Firouzi (@R00tkitSMM) of MBition - Mercedes-Benz Innovation Lab, Siddharth Aeri (@b1n4r1b01)

Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A buffer overflow issue was addressed with improved memory handling. CVE-2022-22593: Peter Nguyễn Vũ Hoàng of STAR Labs

Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted STL file may lead to unexpected application termination or arbitrary code execution Description: An information disclosure issue was addressed with improved state management. CVE-2022-22579: Mickey Jin (@patch1t) of Trend Micro

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted mail message may lead to running arbitrary javascript Description: A validation issue was addressed with improved input sanitization. CVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu of Palo Alto Networks (paloaltonetworks.com)

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2022-22590: Toan Pham from Team Orca of Sea Security (security.sea.com)

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may prevent Content Security Policy from being enforced Description: A logic issue was addressed with improved state management. CVE-2022-22592: Prakash (@1lastBr3ath)

WebKit Storage Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A website may be able to track sensitive user information Description: A cross-origin issue in the IndexDB API was addressed with improved input validation. CVE-2022-22594: Martin Bajanik of FingerprintJS

Additional recognition

WebKit We would like to acknowledge Prakash (@1lastBr3ath) for their assistance.

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated:
* Navigate to Settings
* Select General
* Select About
* The version after applying this update will be "15.3"

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmHx0vIACgkQeC9qKD1p rhj4hBAAuITBqrZx38zr9+MFgchRltErtLD/ZVUZ0mYD/bbVaF8+2RwoP0d3XBHj hkiZAqV8LCe+r9qs2SHBxXZteEEs79R1AIzZCSAjMU9LUK8yXYgHC5BGDoanRmes yuFyrWp78zz5Ix3jop5SUTt0xcxSOK49m7Oozrgr4sfzDg83VzDF9ebna+Obcar1 WArT/yhPC35dwJ5tOJ0Xmdogb3gPEk+ccjw885UpjnQqnkX8g0KOUzRSp/BYwexM vea9a7z3IGrCHaU8rlJWX+GupMUgRtpZr/k6jCzwT7g4BDRYSMYFvJcKZF6xFNgy raxl8Vdm+ZhTK//YNFl7BB1aKixVzI6i85aegtOErUPRwzICD1NDlQK5q3ErBpp+ 5FTvuwn7SWy5BPkSIOwmfoJfGrWTzDmdOAajM5o6Yy5m/OnR5ZqK4egfvwmPjoEy lx9ffhcvm7HbQmLjO4DTQlpqiyk3UmMmE5MEG4QSMA5UOqMinjE0kl+2JEkV7cmt Ugkcc4Auu7jUM3YxCkPfMi/x4/t52BBJbIXzpLnj2qebpci7GW9c3aDPNoQbTty9 +Y1amSmQvVRlqKGEi2xlVKGqN0uduhanyiL6+tt2Q1Afo/jf6JjERVUrOGl/Fv7r sJKt1GE0w3uJ6RQVQ6C3w33HTmzNWwzfdy+I8Ik3Cn8ZgfHY3JA= =JRMz -----END PGP SIGNATURE-----

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202208-39


                                       https://security.gentoo.org/

Severity: High
Title: WebKitGTK+: Multiple Vulnerabilities
Date: August 31, 2022
Bugs: #866494, #864427, #856445, #861740, #837305, #845252, #839984, #833568, #832990
ID: 202208-39


Synopsis

Multiple vulnerabilities have been found in WebkitGTK+, the worst of which could result in the arbitrary execution of code.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk        < 2.36.7                  >= 2.36.7

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebKitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.36.7"

References

[ 1 ] CVE-2022-2294
      https://nvd.nist.gov/vuln/detail/CVE-2022-2294
[ 2 ] CVE-2022-22589
      https://nvd.nist.gov/vuln/detail/CVE-2022-22589
[ 3 ] CVE-2022-22590
      https://nvd.nist.gov/vuln/detail/CVE-2022-22590
[ 4 ] CVE-2022-22592
      https://nvd.nist.gov/vuln/detail/CVE-2022-22592
[ 5 ] CVE-2022-22620
      https://nvd.nist.gov/vuln/detail/CVE-2022-22620
[ 6 ] CVE-2022-22624
      https://nvd.nist.gov/vuln/detail/CVE-2022-22624
[ 7 ] CVE-2022-22628
      https://nvd.nist.gov/vuln/detail/CVE-2022-22628
[ 8 ] CVE-2022-22629
      https://nvd.nist.gov/vuln/detail/CVE-2022-22629
[ 9 ] CVE-2022-22662
      https://nvd.nist.gov/vuln/detail/CVE-2022-22662
[ 10 ] CVE-2022-22677
      https://nvd.nist.gov/vuln/detail/CVE-2022-22677
[ 11 ] CVE-2022-26700
      https://nvd.nist.gov/vuln/detail/CVE-2022-26700
[ 12 ] CVE-2022-26709
      https://nvd.nist.gov/vuln/detail/CVE-2022-26709
[ 13 ] CVE-2022-26710
      https://nvd.nist.gov/vuln/detail/CVE-2022-26710
[ 14 ] CVE-2022-26716
      https://nvd.nist.gov/vuln/detail/CVE-2022-26716
[ 15 ] CVE-2022-26717
      https://nvd.nist.gov/vuln/detail/CVE-2022-26717
[ 16 ] CVE-2022-26719
      https://nvd.nist.gov/vuln/detail/CVE-2022-26719
[ 17 ] CVE-2022-30293
      https://nvd.nist.gov/vuln/detail/CVE-2022-30293
[ 18 ] CVE-2022-30294
      https://nvd.nist.gov/vuln/detail/CVE-2022-30294
[ 19 ] CVE-2022-32784
      https://nvd.nist.gov/vuln/detail/CVE-2022-32784
[ 20 ] CVE-2022-32792
      https://nvd.nist.gov/vuln/detail/CVE-2022-32792
[ 21 ] CVE-2022-32893
      https://nvd.nist.gov/vuln/detail/CVE-2022-32893
[ 22 ] WSA-2022-0002
      https://webkitgtk.org/security/WSA-2022-0002.html
[ 23 ] WSA-2022-0003
      https://webkitgtk.org/security/WSA-2022-0003.html
[ 24 ] WSA-2022-0007
      https://webkitgtk.org/security/WSA-2022-0007.html
[ 25 ] WSA-2022-0008
      https://webkitgtk.org/security/WSA-2022-0008.html

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202208-39

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0567",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "8.4"
      },
      {
        "model": "webkitgtk",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "webkitgtk",
        "version": "2.36.7"
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0.0"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.2"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.3"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "165777"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "165771"
      }
    ],
    "trust": 0.5
  },
  "cve": "CVE-2022-22590",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2022-22590",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-411218",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-22590",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-22590",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202201-2424",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-411218",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-22590",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A use after free issue was addressed with improved memory management. This issue is fixed in iOS 15.3 and iPadOS 15.3, watchOS 8.4, tvOS 15.3, Safari 15.3, macOS Monterey 12.2. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-27918)\n\"Clear History and Website Data\" did not clear the history. A user may be unable to fully delete browsing history. This issue is fixed in macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave. (CVE-2021-1789)\nA port redirection issue was found in WebKitGTK and WPE WebKit in versions prior to 2.30.6. A malicious website may be able to access restricted ports on arbitrary servers. The highest threat from this vulnerability is to data integrity. 14610.4.3.1.7 and 15610.4.3.1.7), watchOS 7.3.2, macOS Big Sur 11.2.3. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-1870)\nA use-after-free vulnerability exists in the way certain events are processed for ImageLoader objects of Webkit WebKitGTK 2.30.4. In order to trigger the vulnerability, a victim must be tricked into visiting a malicious webpage. (CVE-2021-21775)\nA use-after-free vulnerability exists in the way Webkit\u0027s GraphicsContext handles certain events in WebKitGTK 2.30.4. A victim must be tricked into visiting a malicious web page to trigger this vulnerability. (CVE-2021-21779)\nAn exploitable use-after-free vulnerability exists in WebKitGTK browser version 2.30.3 x64. The victim needs to visit a malicious web site to trigger the vulnerability. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-30661)\nAn integer overflow was addressed with improved input validation. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. A malicious application may be able to leak sensitive user information. 
A malicious website may be able to access restricted ports on arbitrary servers. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-30799)\nA use-after-free flaw was found in WebKitGTK. (CVE-2021-30809)\nA confusion type flaw was found in WebKitGTK. (CVE-2021-30818)\nAn out-of-bounds read flaw was found in WebKitGTK. A specially crafted audio file could use this flaw to trigger a disclosure of memory when processed. (CVE-2021-30887)\nAn information leak flaw was found in WebKitGTK. A malicious web site using Content Security Policy reports could use this flaw to leak information via redirects. (CVE-2021-30888)\nA buffer overflow flaw was found in WebKitGTK. (CVE-2021-30951)\nAn integer overflow was addressed with improved input validation. (CVE-2021-30952)\nAn out-of-bounds read was addressed with improved bounds checking. (CVE-2021-30984)\n** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was in a CNA pool that was not assigned to any issues during 2021. Notes: none. (CVE-2021-32912)\nBubblewrapLauncher.cpp in WebKitGTK and WPE WebKit prior to 2.34.1 allows a limited sandbox bypass that allows a sandboxed process to trick host processes into thinking the sandboxed process is not confined by the sandbox, by abusing VFS syscalls that manipulate its filesystem namespace. The impact is limited to host services that create UNIX sockets that WebKit mounts inside its sandbox, and the sandboxed process remains otherwise confined. NOTE: this is similar to CVE-2021-41133. (CVE-2021-42762)\nA segmentation violation vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45481)\nA use-after-free vulnerability was found in webkitgtk. 
An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45482)\nA use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. Video self-preview in a webRTC call may be interrupted if the user answers a phone call. (CVE-2022-26719)\nIn WebKitGTK up to and including 2.36.0 (and WPE WebKit), there is a heap-based buffer overflow in WebCore::TextureMapperLayer::setContentsLayer in WebCore/platform/graphics/texmap/TextureMapperLayer.cpp. An app may be able to disclose kernel memory. Visiting a website that frames malicious content may lead to UI spoofing. Visiting a malicious website may lead to user interface spoofing. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1.. (CVE-2022-46700)\nA flaw was found in the WebKitGTK package. An improper input validation issue may lead to a use-after-free vulnerability. This may, in theory, allow a remote malicious user to create a specially crafted web page, trick the victim into opening it, trigger type confusion, and execute arbitrary code on the target system. (CVE-2023-23529)\nA use-after-free vulnerability in WebCore::RenderLayer::addChild in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25358)\nA use-after-free vulnerability in WebCore::RenderLayer::renderer in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25360)\nA use-after-free vulnerability in WebCore::RenderLayer::setNextSibling in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25361)\nA use-after-free vulnerability in WebCore::RenderLayer::repaintBlockSelectionGaps in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. 
(CVE-2023-25362)\nA use-after-free vulnerability in WebCore::RenderLayer::updateDescendantDependentFlags in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25363)\nThe vulnerability allows a remote malicious user to bypass Same Origin Policy restrictions. (CVE-2023-27932)\nThe vulnerability exists due to excessive data output by the application. A remote attacker can track sensitive user information. (CVE-2023-27954)\nAn out-of-bounds read issue in WebKit that could be abused to disclose sensitive information when processing web content. Apple is aware of a report that this issue may have been actively exploited. (CVE-2023-32373)\nN/A (CVE-2023-32409). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2022:1777-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:1777\nIssue date:        2022-05-10\nCVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823 \n                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848 \n                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884 \n                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889 \n                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934 \n                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952 \n                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984 \n                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483 \n                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592 \n                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637 \n=====================================================================\n\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nThe following packages have been upgraded to a later upstream version:\nwebkit2gtk3 (2.34.6). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nwebkit2gtk3-2.34.6-1.el8.src.rpm\n\naarch64:\nwebkit2gtk3-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_6
4.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-30809\nhttps://access.redhat.com/security/cve/CVE-2021-30818\nhttps://access.redhat.com/security/cve/CVE-2021-30823\nhttps://access.redhat.com/security/cve/CVE-2021-30836\nhttps://access.redhat.com/security/cve/CVE-2021-30846\nhttps://access.redhat.com/security/cve/CVE-2021-30848\nhttps://access.redhat.com/security/cve/CVE-2021-30849\nhttps://access.redhat.com/security/cve/CVE-2021-30851\nhttps://access.redhat.com/security/cve/CVE-2021-30884\nhttps://access.redhat.com/security/cve/CVE-2021-30887\nhttps://access.redhat.com/security/cve/CVE-2021-30888\nhttps://access.redhat.com/security/cve/CVE-2021-30889\nhttps://access.redhat.com/security/cve/CVE-2021-30890\nhttps://access.redhat.com/security/cve/CVE-2021-30897\nhttps://access.redhat.com/security/cve/CVE-2021-30934\nhttps://access.redhat.com/security/cve/CVE-2021-30936\nhttps://access.redhat.com/security/cve/CVE-2021-30951\nhttps://access.redhat.com/security/cve/CVE-2021-30952\nhttps://access.redhat.com/security/cve/CVE-2021-30953\nhttps://access.redhat.com/security/cve/CVE-2021-30954\nhttps://access.redhat.com/security/cve/CVE-2021-30984\nhttps://access.redhat.com/security/cve/CVE-2021-45481\nhttps://access.redhat.com/security/cve/CVE-2021-45482\nhttps://access.redhat.com/security/cve/CVE-2021-45483\nhttps://access.redhat.com/security/cve/CVE-2022-22589\nhttps://access.redhat.com/security/cve/CVE-2022-22590\nhttps://access.redhat.com/security/cve/CVE-2022-22592\nhttps://access.redhat.com/security/cve/CVE-2022-22594\nhttps://access.redhat.com/security/cve/CVE-2022-22620
\nhttps://access.redhat.com/security/cve/CVE-2022-22637\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnqQrdzjgjWX9erEAQi/6BAAhaqaCDj0g7uJ6LdXEng5SqGBFl5g6GIV\np/WSKyL+tI3BpKaaUWr6+d4tNnaQbKxhRTwTSJa8GMrOc7n6Y7LO8Y7mQj3bEFvn\nz3HHLZK8EMgDUz4I0esuh0qNWnfsD/vJDuGbXlHLdNLlc5XzgX7YA6eIb1lxSbxV\nueSENHohbMJLbWoeI2gMUYGb5cAzBHrgdmFIsr4XUd6sr5Z1ZOPnQPf36vrXGdzj\nmPXPijZtr9QiPgwijm4/DkJG7NQ4KyaPMOKauC7PEB1AHWIwHteRnVxnWuZLjpMf\nRqYBQu2pYeTiyGky+ozshJ81mdfLyUQBR/+4KbB2TMFZHDlhxzNFZrErh4+dfQja\nCuf+IwTOSZgC/8XouTQMA27KFSYKd4PzwnB3yQeGU0NA/VngYp12BegeVHlDiadS\nhO+mAk/BAAesdywt7ZTU1e1yROLm/jp0VcmvkQO+gh2WhErEFV3s0qnsu1dfuLY7\nB1e0z6c/vp8lkSFs2fcx0Oq1S7nGIGZiR66loghp03nDoCcxblsxBcFV9CNq6yVG\nBkEAFzMb/AHxqn7KbZeN6g4Los+3Dr7eFYPGUkVEXy+AbHqE+b99pT2TIjCOMh/L\nwXOE+nX3KXbD5MCqvmF2K6w+MfIf3AxzzgirwXyLewSP8NKBmsdBtgwbgFam1QfM\nUqt+dghxtOQ=\n=LCNn\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nFor the oldstable distribution (buster), these problems have been fixed\nin version 2.34.6-1~deb10u1. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.34.6-1~deb11u1. \n\nWe recommend that you upgrade your webkit2gtk packages. \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\". \nCVE-2022-22591: Antonio Zekic (@antoniozekic) of Diverto\n\nIOMobileFrameBuffer\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges. 
\n\nPackageKit\nWe would like to acknowledge Mickey Jin (@patch1t), Mickey Jin\n(@patch1t) of Trend Micro for their assistance. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-01-26-1 iOS 15.3 and iPadOS 15.3\n\niOS 15.3 and iPadOS 15.3 addresses the following issues. \n\nColorSync\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-22584: Mickey Jin (@patch1t) of Trend Micro\n\nCrash Reporter\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2022-22578: an anonymous researcher\n\niCloud\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to access a user\u0027s files\nDescription: An issue existed within the path validation logic for\nsymlinks. \nCVE-2022-22585: Zhipeng Huo (@R3dF09) of Tencent Security Xuanwu Lab\n(https://xlab.tencent.com)\n\nIOMobileFrameBuffer\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges. 
\nCVE-2022-22587: an anonymous researcher, Meysam Firouzi (@R00tkitSMM)\nof MBition - Mercedes-Benz Innovation Lab, Siddharth Aeri\n(@b1n4r1b01)\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nCVE-2022-22593: Peter Nguy\u1ec5n V\u0169 Ho\u00e0ng of STAR Labs\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted STL file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An information disclosure issue was addressed with\nimproved state management. \nCVE-2022-22579: Mickey Jin (@patch1t) of Trend Micro\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted mail message may lead to\nrunning arbitrary javascript\nDescription: A validation issue was addressed with improved input\nsanitization. \nCVE-2022-22589: Heige of KnownSec 404 Team (knownsec.com) and Bo Qu\nof Palo Alto Networks (paloaltonetworks.com)\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2022-22590: Toan Pham from Team Orca of Sea Security\n(security.sea.com)\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may prevent\nContent Security Policy from being enforced\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-22592: Prakash (@1lastBr3ath)\n\nWebKit Storage\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A website may be able to track sensitive user information\nDescription: A cross-origin issue in the IndexDB API was addressed\nwith improved input validation. \nCVE-2022-22594: Martin Bajanik of FingerprintJS\n\nAdditional recognition\n\nWebKit\nWe would like to acknowledge Prakash (@1lastBr3ath) for their\nassistance. \n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. 
\n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \u201c15.3\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmHx0vIACgkQeC9qKD1p\nrhj4hBAAuITBqrZx38zr9+MFgchRltErtLD/ZVUZ0mYD/bbVaF8+2RwoP0d3XBHj\nhkiZAqV8LCe+r9qs2SHBxXZteEEs79R1AIzZCSAjMU9LUK8yXYgHC5BGDoanRmes\nyuFyrWp78zz5Ix3jop5SUTt0xcxSOK49m7Oozrgr4sfzDg83VzDF9ebna+Obcar1\nWArT/yhPC35dwJ5tOJ0Xmdogb3gPEk+ccjw885UpjnQqnkX8g0KOUzRSp/BYwexM\nvea9a7z3IGrCHaU8rlJWX+GupMUgRtpZr/k6jCzwT7g4BDRYSMYFvJcKZF6xFNgy\nraxl8Vdm+ZhTK//YNFl7BB1aKixVzI6i85aegtOErUPRwzICD1NDlQK5q3ErBpp+\n5FTvuwn7SWy5BPkSIOwmfoJfGrWTzDmdOAajM5o6Yy5m/OnR5ZqK4egfvwmPjoEy\nlx9ffhcvm7HbQmLjO4DTQlpqiyk3UmMmE5MEG4QSMA5UOqMinjE0kl+2JEkV7cmt\nUgkcc4Auu7jUM3YxCkPfMi/x4/t52BBJbIXzpLnj2qebpci7GW9c3aDPNoQbTty9\n+Y1amSmQvVRlqKGEi2xlVKGqN0uduhanyiL6+tt2Q1Afo/jf6JjERVUrOGl/Fv7r\nsJKt1GE0w3uJ6RQVQ6C3w33HTmzNWwzfdy+I8Ik3Cn8ZgfHY3JA=\n=JRMz\n-----END PGP SIGNATURE-----\n\n\n. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202208-39\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebKitGTK+: Multiple Vulnerabilities\n     Date: August 31, 2022\n     Bugs: #866494, #864427, #856445, #861740, #837305, #845252, #839984, #833568, #832990\n       ID: 202208-39\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in the arbitrary execution of code. \n\nAffected packages\n================\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk        \u003c 2.36.7                    \u003e= 2.36.7\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll WebKitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.36.7\"\n\nReferences\n=========\n[ 1 ] CVE-2022-2294\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2294\n[ 2 ] CVE-2022-22589\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22589\n[ 3 ] CVE-2022-22590\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22590\n[ 4 ] CVE-2022-22592\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22592\n[ 5 ] CVE-2022-22620\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22620\n[ 6 ] CVE-2022-22624\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22624\n[ 7 ] CVE-2022-22628\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22628\n[ 8 ] CVE-2022-22629\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22629\n[ 9 ] CVE-2022-22662\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22662\n[ 10 ] CVE-2022-22677\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22677\n[ 11 ] CVE-2022-26700\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26700\n[ 12 ] CVE-2022-26709\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26709\n[ 13 ] CVE-2022-26710\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26710\n[ 14 ] CVE-2022-26716\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26716\n[ 15 ] CVE-2022-26717\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26717\n[ 16 ] CVE-2022-26719\n      https://nvd.nist.gov/vuln/detail/CVE-2022-26719\n[ 17 ] CVE-2022-30293\n      https://nvd.nist.gov/vuln/detail/CVE-2022-30293\n[ 18 ] CVE-2022-30294\n      https://nvd.nist.gov/vuln/detail/CVE-2022-30294\n[ 19 ] CVE-2022-32784\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32784\n[ 20 ] CVE-2022-32792\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32792\n[ 21 ] CVE-2022-32893\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32893\n[ 22 ] WSA-2022-0002\n      https://webkitgtk.org/security/WSA-2022-0002.html\n[ 23 ] WSA-2022-0003\n      https://webkitgtk.org/security/WSA-2022-0003.html\n[ 24 
] WSA-2022-0007\n      https://webkitgtk.org/security/WSA-2022-0007.html\n[ 25 ] WSA-2022-0008\n      https://webkitgtk.org/security/WSA-2022-0008.html\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202208-39\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-22590"
      },
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165777"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "165771"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      }
    ],
    "trust": 1.8
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-411218",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-22590",
        "trust": 2.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165777",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168226",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "167037",
        "trust": 0.8
      },
      {
        "db": "CS-HELP",
        "id": "SB2022022120",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022012637",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022020932",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022051140",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0844",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0409",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0724",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0899",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165775",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165772",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165771",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165776",
        "trust": 0.2
      },
      {
        "db": "VULHUB",
        "id": "VHN-411218",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22590",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169229",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165777"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "165771"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "id": "VAR-202201-0567",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:04:09.535000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Apple Safari Remediation of resource management error vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=182131"
      },
      {
        "title": "Debian Security Advisories: DSA-5083-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=1e1726cb3c6d9dabbfb6d6be4668f64f"
      },
      {
        "title": "Debian Security Advisories: DSA-5084-1 wpewebkit -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=fad7bdb7356c54203c2fb7db9019fb4f"
      },
      {
        "title": "Apple: iOS 15.3 and iPadOS 15.3",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=05e71c916b30e0c013cc3ece80cc9189"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2023-2088",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2088"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/Live-Hack-CVE/CVE-2022-22590 "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-416",
        "trust": 1.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://support.apple.com/en-us/ht213058"
      },
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202208-39"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213053"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213054"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213057"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213059"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22590"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-22590"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22592"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022022120"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168226/gentoo-linux-security-advisory-202208-39.html"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-22590/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022020932"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-macos-multiple-vulnerabilities-37394"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165777/apple-security-advisory-2022-01-26-7.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167037/red-hat-security-advisory-2022-1777-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022012637"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022051140"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0409"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-three-vulnerabilities-37548"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0724"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0844"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0899"
      },
      {
        "trust": 0.5,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22594"
      },
      {
        "trust": 0.5,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22584"
      },
      {
        "trust": 0.4,
        "url": "https://xlab.tencent.com)"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22593"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22585"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22578"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22579"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22620"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22587"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/416.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/live-hack-cve/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/2022/dsa-5083"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2023-2088.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22592"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22637"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1777"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22620"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213058."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213059."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213057."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22586"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213054."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22591"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22583"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213053."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26719"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22677"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2294"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26709"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30293"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0008.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30294"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26717"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26700"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26716"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26710"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32893"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32792"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0003.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32784"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0007.html"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165777"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "165771"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "db": "PACKETSTORM",
        "id": "165777"
      },
      {
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "db": "PACKETSTORM",
        "id": "165771"
      },
      {
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-03-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "date": "2022-03-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "date": "2022-05-11T15:50:41",
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "date": "2022-02-28T20:12:00",
        "db": "PACKETSTORM",
        "id": "169229"
      },
      {
        "date": "2022-01-31T15:47:24",
        "db": "PACKETSTORM",
        "id": "165777"
      },
      {
        "date": "2022-01-31T15:47:07",
        "db": "PACKETSTORM",
        "id": "165776"
      },
      {
        "date": "2022-01-31T15:46:53",
        "db": "PACKETSTORM",
        "id": "165775"
      },
      {
        "date": "2022-01-31T15:46:05",
        "db": "PACKETSTORM",
        "id": "165772"
      },
      {
        "date": "2022-01-31T15:45:47",
        "db": "PACKETSTORM",
        "id": "165771"
      },
      {
        "date": "2022-09-01T16:33:44",
        "db": "PACKETSTORM",
        "id": "168226"
      },
      {
        "date": "2022-01-26T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      },
      {
        "date": "2022-03-18T18:15:12.623000",
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-411218"
      },
      {
        "date": "2022-09-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-22590"
      },
      {
        "date": "2022-09-02T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      },
      {
        "date": "2024-11-21T06:47:04.970000",
        "db": "NVD",
        "id": "CVE-2022-22590"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple Multiple product resource management errors and vulnerabilities",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "resource management error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2424"
      }
    ],
    "trust": 0.6
  }
}

VAR-201912-1847

Vulnerability from variot - Updated: 2025-12-22 23:01

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, and iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. Apple has released an update for each product. The expected impact depends on each vulnerability, but may include the following:

  • Information leak
  • User impersonation
  • Arbitrary code execution
  • UI spoofing
  • Insufficient access restrictions
  • Denial of service (DoS)
  • Privilege escalation
  • Memory corruption
  • Authentication bypass

Apple Safari is a web browser included as the default browser in the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. WebKit is a web browser engine component. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple watchOS prior to 6.1; Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; iCloud for Windows prior to 11.0; iTunes for Windows prior to 12.10.2; iCloud for Windows (legacy) prior to 7.15. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge.
(CVE-2019-8601) An out-of-bounds read was addressed with improved input validation. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846) WebKitGTK and WPE WebKit up to and including 2.26.4 (the versions right prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote attackers to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902)

Description:

Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally.

Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3

  1. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

  1. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

  1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update
Advisory ID: RHSA-2021:0436-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2021:0436
Issue date: 2021-02-16
CVE Names: CVE-2018-20843 CVE-2019-1551 CVE-2019-5018 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-11068 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15903 CVE-2019-16168 CVE-2019-16935 CVE-2019-18197 CVE-2019-19221 CVE-2019-19906 CVE-2019-19956 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20807 CVE-2019-20907 CVE-2019-20916 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-8177 CVE-2020-8492 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-11793 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15503 CVE-2020-24659 CVE-2020-28362
====================================================================

1. Summary:

An update for compliance-content-container, ose-compliance-openscap-container, ose-compliance-operator-container, and ose-compliance-operator-metadata-container is now available for Red Hat OpenShift Container Platform 4.6.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

The compliance-operator image updates are now available for OpenShift Container Platform 4.6.

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Security Fix(es):

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
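As background, the math/big flaw could only be triggered with very large operands, because Go's recursive-division code path is used only for big divisors. The sketch below (an illustration under that assumption, not Red Hat's or Go's actual test case; the helper `quotientBits` is hypothetical) exercises that path; on a patched Go runtime the division completes normally instead of panicking:

```go
package main

import (
	"fmt"
	"math/big"
)

// quotientBits divides 2^xBits by 2^yBits using math/big and returns the
// bit length of the quotient. Operands this large take the recursive
// division path in which CVE-2020-28362 could cause a panic.
func quotientBits(xBits, yBits uint) int {
	x := new(big.Int).Lsh(big.NewInt(1), xBits) // 2^xBits
	y := new(big.Int).Lsh(big.NewInt(1), yBits) // 2^yBits
	return new(big.Int).Quo(x, y).BitLen()
}

func main() {
	// 2^100000 / 2^40000 == 2^60000, which is a 60001-bit number.
	fmt.Println(quotientBits(100000, 40000)) // prints 60001
}
```

On vulnerable Go releases the same operation could panic and crash the process, which is why the issue is classed as a denial of service.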

  1. Solution:

For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html.

  1. Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1918990 - ComplianceSuite scans use quay content image for initContainer
1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present
1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules
1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

  1. References:

https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-1551
https://access.redhat.com/security/cve/CVE-2019-5018
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-11068
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15165
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-16168
https://access.redhat.com/security/cve/CVE-2019-16935
https://access.redhat.com/security/cve/CVE-2019-18197
https://access.redhat.com/security/cve/CVE-2019-19221
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20218
https://access.redhat.com/security/cve/CVE-2019-20386
https://access.redhat.com/security/cve/CVE-2019-20387
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2019-20916
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-1751
https://access.redhat.com/security/cve/CVE-2020-1752
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-6405
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8492
https://access.redhat.com/security/cve/CVE-2020-9327
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-10029
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-13630
https://access.redhat.com/security/cve/CVE-2020-13631
https://access.redhat.com/security/cve/CVE-2020-13632
https://access.redhat.com/security/cve/CVE-2020-14382
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-14422
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-24659
https://access.redhat.com/security/cve/CVE-2020-28362
https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2019-10-29-4 watchOS 6.1

watchOS 6.1 is now available and addresses the following:

Accounts Available for: Apple Watch Series 1 and later Impact: A remote attacker may be able to leak memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at Technische Universität Darmstadt
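The fix pattern named here, "improved input validation", generally means rejecting attacker-controlled offsets and lengths before they are used to index a buffer. A generic sketch of that pattern follows; it is purely illustrative (Apple's actual patch for CVE-2019-8787 is not public, and `readField` is a hypothetical helper):

```go
package main

import (
	"errors"
	"fmt"
)

// readField returns the sub-slice of buf described by an untrusted
// offset/length pair, refusing any request that would read out of
// bounds. This is the generic shape of an input-validation fix for an
// out-of-bounds read; it is not Apple's actual code.
func readField(buf []byte, offset, length int) ([]byte, error) {
	if offset < 0 || length < 0 || offset > len(buf) || length > len(buf)-offset {
		return nil, errors.New("field out of bounds")
	}
	return buf[offset : offset+length], nil
}

func main() {
	packet := []byte{1, 2, 3, 4, 5, 6, 7, 8}
	_, inBounds := readField(packet, 4, 4) // valid: bytes 4..7
	_, tooLong := readField(packet, 4, 8)  // would read past the end
	fmt.Println(inBounds == nil, tooLong == nil) // prints: true false
}
```

Without the bounds check, the second request would read memory past the buffer, which is how this class of bug leaks memory to a remote attacker.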

App Store Available for: Apple Watch Series 1 and later Impact: A local attacker may be able to log in to the account of a previously logged-in user without valid credentials. CVE-2019-8803: Kiyeon An, 차민규 (CHA Minkyu)

AppleFirmwareUpdateKext Available for: Apple Watch Series 1 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption vulnerability was addressed with improved locking. CVE-2019-8785: Ian Beer of Google Project Zero CVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure

Contacts Available for: Apple Watch Series 1 and later Impact: Processing a maliciously crafted contact may lead to UI spoofing Description: An inconsistent user interface issue was addressed with improved state management. CVE-2017-7152: Oliver Paukstadt of Thinking Objects GmbH (to.com)

File System Events Available for: Apple Watch Series 1 and later Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8798: ABC Research s.r.o. working with Trend Micro's Zero Day Initiative

Kernel Available for: Apple Watch Series 1 and later Impact: An application may be able to read restricted memory Description: A validation issue was addressed with improved input sanitization. CVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure

Kernel Available for: Apple Watch Series 1 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8750: found by OSS-Fuzz

VoiceOver Available for: Apple Watch Series 1 and later Impact: A person with physical access to an iOS device may be able to access contacts from the lock screen Description: The issue was addressed by restricting options offered on a locked device.

CFNetwork We would like to acknowledge Lily Chen of Google for their assistance.

Safari We would like to acknowledge Ron Summers for their assistance.

WebKit We would like to acknowledge Zhiyi Zhang of Codesafe Team of Legendsec at Qi'anxin Group for their assistance.

Installation note:

Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641

To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About".

Alternatively, on your watch, select "My Watch > General > About".

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/ -----BEGIN PGP SIGNATURE-----

iQJdBAEBCABHFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl24p5UpHHByb2R1Y3Qt
c2VjdXJpdHktbm9yZXBseUBsaXN0cy5hcHBsZS5jb20ACgkQBz4uGe3y0M2cKA/9
E2JQk+Cjnsyxg4Ri4ofp7B+cerywkBJ8NIrBhILRHqPxq4s912HDu5ZMr6qLWaOW
JnT8TQKRAy6+xDe+DZvRiy58x2y0rO4Ma8vNmFPhcztCf5J4/D/LHRD61ZRHth1r
eCH31PfbpZ+8WJTvk96ldLahmpXPgoPuAPExcFH4gkfH1w8cgpZr8+c/ZWvw2tt6
3vPs5cWEF9hKh6PyX+O5JOBna7VZs+ux5kP7wy3VP2qqGjz1sAlm+UtWHRvIkjQ8
Cblww/e/XS9J9p9O6RhcpNf6xG0BbaNun/CXIna0UHloD/FRTB0qNzzsTMAHzOcJ
xaVpRBu/PxXU4N5va+74gstcXmkBt0Go0H3W4Cwd2ri5ny4H0mAKA2hW+sMEUeAK
dtZWtf/EaUMxz7uEgesC7pbYcgxrF1fKPpmDOEkikzmSNIkcp/2CG4fa3+ygZLd6
67nECYGwgs0k7MuuI4S6yD/FJpRPQfamsIbir6JzRAaAIi7HtnEuDVMp1phaZqtv
m151JAbP4nUzC4sZhJWhBau9ZcDqfxnlQ+tcaaq/RP+syJYpjrjoP5Q70wSQ9KpJ
fEESlMFLFmFNK4bsUEZcgetnoM2X63fln1PZS9TyuwDKGrG7CNQpgBRGcLlh5ErK
BOqAybhBQhqDLXfkI2t4yjPwOXDO97KGqUDcHc33dGg=
=UBW9
-----END PGP SIGNATURE-----

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from oc explain
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.
1891285 - Common templates and kubevirt-config cm - update machine-type
1891440 - [v2v][VMware to CNV VM import API]Source VM with no network interface fail with unclear error
1892227 - [SSP] cluster scoped resources are not being reconciled
1893278 - openshift-virtualization-os-images namespace not seen by user
1893646 - [HCO] Pod placement configuration - dry run is not performed for all the configuration stanza
1894428 - Message for VMI not migratable is not clear enough
1894824 - [v2v][VM import] Pick the smallest template for the imported VM, and not always Medium
1894897 - [v2v][VMIO] VMimport CR is not reported as failed when target VM is deleted during the import
1895414 - Virt-operator is accepting updates to the placement of its workload components even with running VMs
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1898072 - Add Fedora33 to Fedora common templates
1898840 - [v2v] VM import VMWare to CNV Import 63 chars vm name should not fail
1899558 - CNV 2.6 - nmstate fails to set state
1901480 - VM disk io can't worked if namespace have label kubemacpool
1902046 - Not possible to edit CDIConfig (through CDI CR / CDIConfig)
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1903014 - hco-webhook pod in CreateContainerError
1903585 - [v2v] Windows 2012 VM imported from RHV goes into Windows repair mode
1904797 - [VMIO][vmware] A migrated RHEL/Windows VM starts in emergency mode/safe mode when target storage is NFS and target namespace is NOT "default"
1906199 - [CNV-2.5] CNV Tries to Install on Windows Workers
1907151 - kubevirt version is not reported correctly via virtctl
1907352 - VM/VMI link changes to kubevirt.io~v1~VirtualMachineInstance on CNV 2.6
1907691 - [CNV] Configuring NodeNetworkConfigurationPolicy caused "Internal error occurred" for creating datavolume
1907988 - VM loses dynamic IP address of its default interface after migration
1908363 - Applying NodeNetworkConfigurationPolicy for different NIC than default disables br-ex bridge and nodes lose connectivity
1908421 - [v2v] [VM import RHV to CNV] Windows imported VM boot failed: INACCESSIBLE BOOT DEVICE error
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1909458 - [V2V][VMware to CNV VM import via api using VMIO] VM import to Ceph RBD/BLOCK fails on "qemu-img: /data/disk.img" error
1910857 - Provide a mechanism to enable the HotplugVolumes feature gate via HCO
1911118 - Windows VMI LiveMigration / shutdown fails on 'XML error: non unique alias detected: ua-')
1911396 - Set networkInterfaceMultiqueue false in rhel 6 template for e1000e interface
1911662 - el6 guests don't work properly if virtio bus is specified on various devices
1912908 - Allow using "scsi" bus for disks in template validation
1913248 - Creating vlan interface on top of a bond device via NodeNetworkConfigurationPolicy fails
1913320 - Informative message needed with virtctl image-upload, that additional step is needed from the user
1913717 - Users should have read permitions for golden images data volumes
1913756 - Migrating to Ceph-RBD + Block fails when skipping zeroes
1914177 - CNV does not preallocate blank file data volumes
1914608 - Obsolete CPU models (kubevirt-cpu-plugin-configmap) are set on worker nodes
1914947 - HPP golden images - DV shoudld not be created with WaitForFirstConsumer
1917908 - [VMIO] vmimport pod fail to create when using ceph-rbd/block
1917963 - [CNV 2.6] Unable to install CNV disconnected - requires kvm-info-nfd-plugin which is not mirrored
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1920576 - HCO can report ready=true when it failed to create a CR for a component operator
1920610 - e2e-aws-4.7-cnv consistently failing on Hyperconverged Cluster Operator
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923979 - kubernetes-nmstate: nmstate-handler pod crashes when configuring bridge device using ip tool
1927373 - NoExecute taint violates pdb; VMIs are not live migrated
1931376 - VMs disconnected from nmstate-defined bridge after CNV-2.5.4->CNV-2.6.0 upgrade


WebKitGTK and WPE WebKit Security Advisory WSA-2019-0006

Date reported           : November 08, 2019
Advisory ID             : WSA-2019-0006
WebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2019-0006.html
WPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2019-0006.html
CVE identifiers         : CVE-2019-8710, CVE-2019-8743, CVE-2019-8764,
                          CVE-2019-8765, CVE-2019-8766, CVE-2019-8782,
                          CVE-2019-8783, CVE-2019-8808, CVE-2019-8811,
                          CVE-2019-8812, CVE-2019-8813, CVE-2019-8814,
                          CVE-2019-8815, CVE-2019-8816, CVE-2019-8819,
                          CVE-2019-8820, CVE-2019-8821, CVE-2019-8822,
                          CVE-2019-8823.

Several vulnerabilities were discovered in WebKitGTK and WPE WebKit.

CVE-2019-8710 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to found by OSS-Fuzz.

CVE-2019-8743 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to zhunki from Codesafe Team of Legendsec at Qi'anxin Group.

CVE-2019-8764 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to Sergei Glazunov of Google Project Zero.

CVE-2019-8765 Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before 2.24.3. Credit to Samuel Groß of Google Project Zero.

CVE-2019-8766 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to found by OSS-Fuzz.

CVE-2019-8782 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to Cheolung Lee of LINE+ Security Team.

CVE-2019-8783 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Cheolung Lee of LINE+ Graylab Security Team.

CVE-2019-8808 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to found by OSS-Fuzz.

CVE-2019-8811 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Soyeon Park of SSLab at Georgia Tech.

CVE-2019-8812 Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before 2.26.2. Credit to an anonymous researcher.

CVE-2019-8813 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to an anonymous researcher.

CVE-2019-8814 Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before 2.26.2. Credit to Cheolung Lee of LINE+ Security Team.

CVE-2019-8815 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to Apple.

CVE-2019-8816 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Soyeon Park of SSLab at Georgia Tech.

CVE-2019-8819 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Cheolung Lee of LINE+ Security Team.

CVE-2019-8820 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Samuel Groß of Google Project Zero.

CVE-2019-8821 Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before 2.24.3. Credit to Sergei Glazunov of Google Project Zero.

CVE-2019-8822 Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before 2.24.3. Credit to Sergei Glazunov of Google Project Zero.

CVE-2019-8823 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Sergei Glazunov of Google Project Zero.

We recommend updating to the latest stable versions of WebKitGTK and WPE WebKit. It is the best way to ensure that you are running safe versions of WebKit. Please check our websites for information about the latest stable releases.

Further information about WebKitGTK and WPE WebKit security advisories can be found at: https://webkitgtk.org/security.html or https://wpewebkit.org/security/.

The WebKitGTK and WPE WebKit team, November 08, 2019

To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment
1838802 - mysql8 connector from operatorhub does not work with metering operator
1838845 - Metering operator can't connect to postgres DB from Operator Hub
1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1868294 - NFD operator does not allow customisation of nfd-worker.conf
1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration
1890672 - NFD is missing a build flag to build correctly
1890741 - path to the CA trust bundle ConfigMap is broken in report operator
1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster
1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel
1900125 - FIPS error while generating RSA private key for CA
1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub
1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub
1913837 - The CI and ART 4.7 metering images are not mirrored
1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le
1916010 - olm skip range is set to the wrong range
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923998 - NFD Operator is failing to update and remains in Replacing state


Gentoo Linux Security Advisory GLSA 202003-22

                                       https://security.gentoo.org/

Severity: Normal
Title: WebkitGTK+: Multiple vulnerabilities
Date: March 15, 2020
Bugs: #699156, #706374, #709612
ID: 202003-22


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which may lead to arbitrary code execution.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------
  1  net-libs/webkit-gtk        < 2.26.4                 >= 2.26.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"

References

[ 1 ] CVE-2019-8625   https://nvd.nist.gov/vuln/detail/CVE-2019-8625
[ 2 ] CVE-2019-8674   https://nvd.nist.gov/vuln/detail/CVE-2019-8674
[ 3 ] CVE-2019-8707   https://nvd.nist.gov/vuln/detail/CVE-2019-8707
[ 4 ] CVE-2019-8710   https://nvd.nist.gov/vuln/detail/CVE-2019-8710
[ 5 ] CVE-2019-8719   https://nvd.nist.gov/vuln/detail/CVE-2019-8719
[ 6 ] CVE-2019-8720   https://nvd.nist.gov/vuln/detail/CVE-2019-8720
[ 7 ] CVE-2019-8726   https://nvd.nist.gov/vuln/detail/CVE-2019-8726
[ 8 ] CVE-2019-8733   https://nvd.nist.gov/vuln/detail/CVE-2019-8733
[ 9 ] CVE-2019-8735   https://nvd.nist.gov/vuln/detail/CVE-2019-8735
[ 10 ] CVE-2019-8743  https://nvd.nist.gov/vuln/detail/CVE-2019-8743
[ 11 ] CVE-2019-8763  https://nvd.nist.gov/vuln/detail/CVE-2019-8763
[ 12 ] CVE-2019-8764  https://nvd.nist.gov/vuln/detail/CVE-2019-8764
[ 13 ] CVE-2019-8765  https://nvd.nist.gov/vuln/detail/CVE-2019-8765
[ 14 ] CVE-2019-8766  https://nvd.nist.gov/vuln/detail/CVE-2019-8766
[ 15 ] CVE-2019-8768  https://nvd.nist.gov/vuln/detail/CVE-2019-8768
[ 16 ] CVE-2019-8769  https://nvd.nist.gov/vuln/detail/CVE-2019-8769
[ 17 ] CVE-2019-8771  https://nvd.nist.gov/vuln/detail/CVE-2019-8771
[ 18 ] CVE-2019-8782  https://nvd.nist.gov/vuln/detail/CVE-2019-8782
[ 19 ] CVE-2019-8783  https://nvd.nist.gov/vuln/detail/CVE-2019-8783
[ 20 ] CVE-2019-8808  https://nvd.nist.gov/vuln/detail/CVE-2019-8808
[ 21 ] CVE-2019-8811  https://nvd.nist.gov/vuln/detail/CVE-2019-8811
[ 22 ] CVE-2019-8812  https://nvd.nist.gov/vuln/detail/CVE-2019-8812
[ 23 ] CVE-2019-8813  https://nvd.nist.gov/vuln/detail/CVE-2019-8813
[ 24 ] CVE-2019-8814  https://nvd.nist.gov/vuln/detail/CVE-2019-8814
[ 25 ] CVE-2019-8815  https://nvd.nist.gov/vuln/detail/CVE-2019-8815
[ 26 ] CVE-2019-8816  https://nvd.nist.gov/vuln/detail/CVE-2019-8816
[ 27 ] CVE-2019-8819  https://nvd.nist.gov/vuln/detail/CVE-2019-8819
[ 28 ] CVE-2019-8820  https://nvd.nist.gov/vuln/detail/CVE-2019-8820
[ 29 ] CVE-2019-8821  https://nvd.nist.gov/vuln/detail/CVE-2019-8821
[ 30 ] CVE-2019-8822  https://nvd.nist.gov/vuln/detail/CVE-2019-8822
[ 31 ] CVE-2019-8823  https://nvd.nist.gov/vuln/detail/CVE-2019-8823
[ 32 ] CVE-2019-8835  https://nvd.nist.gov/vuln/detail/CVE-2019-8835
[ 33 ] CVE-2019-8844  https://nvd.nist.gov/vuln/detail/CVE-2019-8844
[ 34 ] CVE-2019-8846  https://nvd.nist.gov/vuln/detail/CVE-2019-8846
[ 35 ] CVE-2020-3862  https://nvd.nist.gov/vuln/detail/CVE-2020-3862
[ 36 ] CVE-2020-3864  https://nvd.nist.gov/vuln/detail/CVE-2020-3864
[ 37 ] CVE-2020-3865  https://nvd.nist.gov/vuln/detail/CVE-2020-3865
[ 38 ] CVE-2020-3867  https://nvd.nist.gov/vuln/detail/CVE-2020-3867
[ 39 ] CVE-2020-3868  https://nvd.nist.gov/vuln/detail/CVE-2020-3868

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202003-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1847",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.2"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "6.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.3"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.15"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.8"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 11.0 earlier"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 7.15 earlier"
      },
      {
        "model": "ios",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "12.10.2 for windows earlier"
      },
      {
        "model": "macos catalina",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.15.1 earlier"
      },
      {
        "model": "macos high sierra",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.13.6 (security update 2019-006 not applied )"
      },
      {
        "model": "macos mojave",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.14.6 (security update 2019-001 not applied )"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.3 earlier"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "6.1 earlier"
      },
      {
        "model": "xcode",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "11.2 earlier"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_catalina",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_high_sierra",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_mojave",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:watchos",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:xcode",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple,Red Hat,saelo,WebKitGTK+ Team,Gentoo",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2019-8820",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-8820",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-160255",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2019-8820",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-8820",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201910-1775",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-160255",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-8820",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Has released an update for each product.The expected impact depends on each vulnerability, but can be affected as follows: * * information leak * * User impersonation * * Arbitrary code execution * * UI Spoofing * * Insufficient access restrictions * * Service operation interruption (DoS) * * Privilege escalation * * Memory corruption * * Authentication bypass. Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. WebKit is one of the web browser engine components. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple watchOS prior to 6.1; Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; Windows-based iCloud prior to 11.0; Windows-based iTunes 12.10 .2 version; versions prior to iCloud 7.15 based on the Windows platform. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. 
(CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. 
\nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update\nAdvisory ID:       RHSA-2021:0436-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:0436\nIssue date:        2021-02-16\nCVE Names:         CVE-2018-20843 CVE-2019-1551 CVE-2019-5018\n                   CVE-2019-8625 CVE-2019-8710 CVE-2019-8720\n                   CVE-2019-8743 CVE-2019-8764 CVE-2019-8766\n                   CVE-2019-8769 CVE-2019-8771 CVE-2019-8782\n                   CVE-2019-8783 CVE-2019-8808 CVE-2019-8811\n                   CVE-2019-8812 CVE-2019-8813 CVE-2019-8814\n                   CVE-2019-8815 CVE-2019-8816 CVE-2019-8819\n                   CVE-2019-8820 CVE-2019-8823 CVE-2019-8835\n                   CVE-2019-8844 CVE-2019-8846 CVE-2019-11068\n                   CVE-2019-13050 CVE-2019-13627 CVE-2019-14889\n                   CVE-2019-15165 CVE-2019-15903 CVE-2019-16168\n                   CVE-2019-16935 CVE-2019-18197 CVE-2019-19221\n              
     CVE-2019-19906 CVE-2019-19956 CVE-2019-20218\n                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388\n                   CVE-2019-20454 CVE-2019-20807 CVE-2019-20907\n                   CVE-2019-20916 CVE-2020-1730 CVE-2020-1751\n                   CVE-2020-1752 CVE-2020-1971 CVE-2020-3862\n                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867\n                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894\n                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899\n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902\n                   CVE-2020-6405 CVE-2020-7595 CVE-2020-8177\n                   CVE-2020-8492 CVE-2020-9327 CVE-2020-9802\n                   CVE-2020-9803 CVE-2020-9805 CVE-2020-9806\n                   CVE-2020-9807 CVE-2020-9843 CVE-2020-9850\n                   CVE-2020-9862 CVE-2020-9893 CVE-2020-9894\n                   CVE-2020-9895 CVE-2020-9915 CVE-2020-9925\n                   CVE-2020-10018 CVE-2020-10029 CVE-2020-11793\n                   CVE-2020-13630 CVE-2020-13631 CVE-2020-13632\n                   CVE-2020-14382 CVE-2020-14391 CVE-2020-14422\n                   CVE-2020-15503 CVE-2020-24659 CVE-2020-28362\n====================================================================\n1. Summary:\n\nAn update for compliance-content-container,\nose-compliance-openscap-container, ose-compliance-operator-container, and\nose-compliance-operator-metadata-container is now available for Red Hat\nOpenShift Container Platform 4.6. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. 
\n\nThe compliance-operator image updates are now available for OpenShift\nContainer Platform 4.6. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. \n\nSecurity Fix(es):\n\n* golang: math/big: panic during recursive division of very large numbers\n(CVE-2020-28362)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. \n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2019-1551\nhttps://access.redhat.com/security/cve/CVE-2019-5018\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-11068\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15165\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-16168\nhttps://access.redhat.com/security/cve/CVE-2019-16935\nhttps://access.redhat.com/security/cve/CVE-2019-18197\nhttps://access.redhat.com/security/cve/CVE-2019-19221\nhttps://access.redhat.com/secu
rity/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20218\nhttps://access.redhat.com/security/cve/CVE-2019-20386\nhttps://access.redhat.com/security/cve/CVE-2019-20387\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2019-20916\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-1751\nhttps://access.redhat.com/security/cve/CVE-2020-1752\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-6405\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8492\nhttps://access.redhat.com/security/cve/CVE-2020-9327\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.
com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2019-10-29-4 watchOS 6.1\n\nwatchOS 6.1 is now available and addresses the following:\n\nAccounts\nAvailable for: Apple Watch Series 1 and later\nImpact: A remote attacker may be able to leak memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at\nTechnische Universit\u00e4t Darmstadt\n\nApp Store\nAvailable for: Apple Watch Series 1 and later\nImpact: A local attacker may be able to login to the account of a\npreviously logged in user without valid credentials. 
\nCVE-2019-8803: Kiyeon An, \ucc28\ubbfc\uaddc (CHA Minkyu)\n\nAppleFirmwareUpdateKext\nAvailable for: Apple Watch Series 1 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption vulnerability was addressed with\nimproved locking. \nCVE-2019-8785: Ian Beer of Google Project Zero\nCVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure\n\nContacts\nAvailable for: Apple Watch Series 1 and later\nImpact: Processing a maliciously crafted contact may lead to UI spoofing\nDescription: An inconsistent user interface issue was addressed with\nimproved state management. \nCVE-2017-7152: Oliver Paukstadt of Thinking Objects GmbH (to.com)\n\nFile System Events\nAvailable for: Apple Watch Series 1 and later\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8798: ABC Research s.r.o. working with Trend Micro\u0027s Zero\nDay Initiative\n\nKernel\nAvailable for: Apple Watch Series 1 and later\nImpact: An application may be able to read restricted memory\nDescription: A validation issue was addressed with improved input\nsanitization. \nCVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure\n\nKernel\nAvailable for: Apple Watch Series 1 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8750: found by OSS-Fuzz\n\nVoiceOver\nAvailable for: Apple Watch Series 1 and later\nImpact: A person with physical access to an iOS device may be able to\naccess contacts from the lock screen\nDescription: The issue was addressed by restricting options offered\non a locked device. \n\nCFNetwork\nWe would like to acknowledge Lily Chen of Google for their\nassistance. \n\nSafari\nWe would like to acknowledge Ron Summers for their assistance. 
\n\nWebKit\nWe would like to acknowledge Zhiyi Zhang of Codesafe Team of\nLegendsec at Qi\u0027anxin Group for their assistance. \n\nInstallation note:\n\nInstructions on how to update your Apple Watch software are\navailable at https://support.apple.com/kb/HT204641\n\nTo check the version on your Apple Watch, open the Apple Watch app\non your iPhone and select \"My Watch \u003e General \u003e About\". \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\". \n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1732329 - Virtual Machine is missing documentation of its properties in yaml editor\n1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv\n1791753 - [RFE] [SSP] Template validator should check validations in template\u0027s parent template\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1848954 - KMP missing CA extensions  in cabundle of mutatingwebhookconfiguration\n1848956 - KMP  requires downtime for CA stabilization during certificate rotation\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1853911 - VM with dot in network name fails to start with unclear message\n1854098 - NodeNetworkState on workers doesn\u0027t have \"status\" key due to nmstate-handler pod failure to run \"nmstatectl show\"\n1856347 - SR-IOV : Missing network name for sriov during vm setup\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1859235 - Common Templates - after upgrade there are 2  common templates per each os-workload-flavor combination\n1860714 - No API information from `oc explain`\n1860992 - CNV upgrade - users are not removed from privileged  SecurityContextConstraints\n1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem\n1866593 - CDI is not handling vm disk clone\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868817 - Container-native Virtualization 2.6.0 Images\n1873771 - Improve the VMCreationFailed error message caused by VM low memory\n1874812 - SR-IOV: Guest Agent  expose link-local ipv6 address  for sometime and then remove it\n1878499 - DV import doesn\u0027t recover from scratch space PVC deletion\n1879108 - Inconsistent 
naming of \"oc virt\" command in help text\n1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running\n1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message\n1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used\n1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied\n1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. \n1891285 - Common templates and kubevirt-config cm - update machine-type\n1891440 - [v2v][VMware to CNV VM import API]Source VM with no network interface fail with unclear error\n1892227 - [SSP] cluster scoped resources are not being reconciled\n1893278 - openshift-virtualization-os-images namespace not seen by user\n1893646 - [HCO] Pod placement configuration - dry run is not performed for all the configuration stanza\n1894428 - Message for VMI not migratable is not clear enough\n1894824 - [v2v][VM import] Pick the smallest template for the imported VM, and not always Medium\n1894897 - [v2v][VMIO] VMimport CR is not reported as failed when target VM is deleted during the import\n1895414 - Virt-operator is accepting updates to the placement of its workload components even with running VMs\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1898072 - Add Fedora33 to Fedora common templates\n1898840 - [v2v] VM import VMWare to CNV Import 63 chars vm name should not fail\n1899558 - CNV 2.6 - nmstate fails to set state\n1901480 - VM disk io can\u0027t worked if namespace have label kubemacpool\n1902046 - Not possible to edit 
CDIConfig (through CDI CR / CDIConfig)\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1903014 - hco-webhook pod in CreateContainerError\n1903585 - [v2v] Windows 2012 VM imported from RHV goes into Windows repair mode\n1904797 - [VMIO][vmware] A migrated RHEL/Windows VM starts in emergency mode/safe mode when target storage is NFS and target namespace is NOT \"default\"\n1906199 - [CNV-2.5] CNV Tries to Install on Windows Workers\n1907151 - kubevirt version is not reported correctly via virtctl\n1907352 - VM/VMI link changes to `kubevirt.io~v1~VirtualMachineInstance` on CNV 2.6\n1907691 - [CNV] Configuring NodeNetworkConfigurationPolicy caused \"Internal error occurred\" for creating datavolume\n1907988 - VM loses dynamic IP address of its default interface after migration\n1908363 - Applying NodeNetworkConfigurationPolicy for different NIC than default disables br-ex bridge and nodes lose connectivity\n1908421 - [v2v] [VM import RHV to CNV] Windows imported VM boot failed: INACCESSIBLE BOOT DEVICE error\n1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference\n1909458 - [V2V][VMware to CNV VM import via api using VMIO] VM import  to Ceph RBD/BLOCK fails on \"qemu-img: /data/disk.img\" error\n1910857 - Provide a mechanism to enable the HotplugVolumes feature gate via HCO\n1911118 - Windows VMI LiveMigration / shutdown fails on \u0027XML error: non unique alias detected: ua-\u0027)\n1911396 - Set networkInterfaceMultiqueue false in rhel 6 template for e1000e interface\n1911662 - el6 guests don\u0027t work properly if virtio bus is specified on various devices\n1912908 - Allow using \"scsi\" bus for disks in template validation\n1913248 - Creating vlan interface on top of a bond device via NodeNetworkConfigurationPolicy fails\n1913320 - Informative message needed with virtctl image-upload, that additional step is needed from the user\n1913717 - Users should 
have read permitions for golden images data volumes\n1913756 - Migrating to Ceph-RBD + Block fails when skipping zeroes\n1914177 - CNV does not preallocate blank file data volumes\n1914608 - Obsolete CPU models (kubevirt-cpu-plugin-configmap) are set on worker nodes\n1914947 - HPP golden images - DV shoudld not be created with WaitForFirstConsumer\n1917908 - [VMIO] vmimport pod fail to create when using ceph-rbd/block\n1917963 - [CNV 2.6] Unable to install CNV disconnected - requires kvm-info-nfd-plugin which is not mirrored\n1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration\n1920576 - HCO can report ready=true when it failed to create a CR for a component operator\n1920610 - e2e-aws-4.7-cnv consistently failing on Hyperconverged Cluster Operator\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923979 - kubernetes-nmstate: nmstate-handler pod crashes when configuring bridge device using ip tool\n1927373 - NoExecute taint violates pdb; VMIs are not live migrated\n1931376 - VMs disconnected from nmstate-defined bridge after CNV-2.5.4-\u003eCNV-2.6.0 upgrade\n\n5. 
------------------------------------------------------------------------\nWebKitGTK and WPE WebKit Security Advisory                 WSA-2019-0006\n------------------------------------------------------------------------\n\nDate reported           : November 08, 2019\nAdvisory ID             : WSA-2019-0006\nWebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2019-0006.html\nWPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2019-0006.html\nCVE identifiers         : CVE-2019-8710, CVE-2019-8743, CVE-2019-8764,\n                          CVE-2019-8765, CVE-2019-8766, CVE-2019-8782,\n                          CVE-2019-8783, CVE-2019-8808, CVE-2019-8811,\n                          CVE-2019-8812, CVE-2019-8813, CVE-2019-8814,\n                          CVE-2019-8815, CVE-2019-8816, CVE-2019-8819,\n                          CVE-2019-8820, CVE-2019-8821, CVE-2019-8822,\n                          CVE-2019-8823. \n\nSeveral vulnerabilities were discovered in WebKitGTK and WPE WebKit. \n\nCVE-2019-8710\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to found by OSS-Fuzz. \n\nCVE-2019-8743\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to zhunki from Codesafe Team of Legendsec at Qi\u0027anxin Group. \n\nCVE-2019-8764\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to Sergei Glazunov of Google Project Zero. \n\nCVE-2019-8765\n    Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before\n    2.24.3. \n    Credit to Samuel Gro\u00df of Google Project Zero. \n\nCVE-2019-8766\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to found by OSS-Fuzz. \n\nCVE-2019-8782\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to Cheolung Lee of LINE+ Security Team. 
\n\nCVE-2019-8783\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Cheolung Lee of LINE+ Graylab Security Team. \n\nCVE-2019-8808\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to found by OSS-Fuzz. \n\nCVE-2019-8811\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Soyeon Park of SSLab at Georgia Tech. \n\nCVE-2019-8812\n    Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before\n    2.26.2. \n    Credit to an anonymous researcher. \n\nCVE-2019-8813\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to an anonymous researcher. \n\nCVE-2019-8814\n    Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before\n    2.26.2. \n    Credit to Cheolung Lee of LINE+ Security Team. \n\nCVE-2019-8815\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to Apple. \n\nCVE-2019-8816\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Soyeon Park of SSLab at Georgia Tech. \n\nCVE-2019-8819\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Cheolung Lee of LINE+ Security Team. \n\nCVE-2019-8820\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Samuel Gro\u00df of Google Project Zero. \n\nCVE-2019-8821\n    Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before\n    2.24.3. \n    Credit to Sergei Glazunov of Google Project Zero. \n\nCVE-2019-8822\n    Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before\n    2.24.3. \n    Credit to Sergei Glazunov of Google Project Zero. \n\nCVE-2019-8823\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Sergei Glazunov of Google Project Zero. 
\n\n\nWe recommend updating to the latest stable versions of WebKitGTK and WPE\nWebKit. It is the best way to ensure that you are running safe versions\nof WebKit. Please check our websites for information about the latest\nstable releases. \n\nFurther information about WebKitGTK and WPE WebKit security advisories\ncan be found at: https://webkitgtk.org/security.html or\nhttps://wpewebkit.org/security/. \n\nThe WebKitGTK and WPE WebKit team,\nNovember 08, 2019\n\n. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. Bugs fixed (https://bugzilla.redhat.com/):\n\n1823765 - nfd-workers crash under an ipv6 environment\n1838802 - mysql8 connector from operatorhub does not work with metering operator\n1838845 - Metering operator can\u0027t connect to postgres DB from Operator Hub\n1841883 - namespace-persistentvolumeclaim-usage  query returns unexpected values\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1868294 - NFD operator does not allow customisation of nfd-worker.conf\n1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration\n1890672 - NFD is missing a build flag to build correctly\n1890741 - path to the CA trust bundle ConfigMap is broken in report operator\n1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster\n1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel\n1900125 - FIPS error while generating RSA private key for CA\n1906129 - OCP 4.7:  Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub\n1908492 - OCP 4.7:  Node Feature Discovery (NFD) Operator Custom Resource Definition file in 
olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub\n1913837 - The CI and ART 4.7 metering images are not mirrored\n1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le\n1916010 - olm skip range is set to the wrong range\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923998 - NFD Operator is failing to update and remains in Replacing state\n\n5. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202003-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: March 15, 2020\n     Bugs: #699156, #706374, #709612\n       ID: 202003-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich may lead to arbitrary code execution. \n\nBackground\n==========\n\nWebKitGTK+ is a full-featured port of the WebKit rendering engine,\nsuitable for projects requiring any kind of web integration, from\nhybrid HTML/CSS applications to full-fledged web browsers. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk          \u003c 2.26.4                  \u003e= 2.26.4\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the referenced CVE identifiers for details. 
\n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.26.4\"\n\nReferences\n==========\n\n[  1 ] CVE-2019-8625\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8625\n[  2 ] CVE-2019-8674\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8674\n[  3 ] CVE-2019-8707\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8707\n[  4 ] CVE-2019-8710\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8710\n[  5 ] CVE-2019-8719\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8719\n[  6 ] CVE-2019-8720\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8720\n[  7 ] CVE-2019-8726\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8726\n[  8 ] CVE-2019-8733\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8733\n[  9 ] CVE-2019-8735\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8735\n[ 10 ] CVE-2019-8743\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8743\n[ 11 ] CVE-2019-8763\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8763\n[ 12 ] CVE-2019-8764\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8764\n[ 13 ] CVE-2019-8765\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8765\n[ 14 ] CVE-2019-8766\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8766\n[ 15 ] CVE-2019-8768\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8768\n[ 16 ] CVE-2019-8769\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8769\n[ 17 ] CVE-2019-8771\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8771\n[ 18 ] CVE-2019-8782\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8782\n[ 19 ] CVE-2019-8783\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8783\n[ 20 ] CVE-2019-8808\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8808\n[ 21 ] CVE-2019-8811\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8811\n[ 22 ] CVE-2019-8812\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8812\n[ 23 ] 
CVE-2019-8813\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8813\n[ 24 ] CVE-2019-8814\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8814\n[ 25 ] CVE-2019-8815\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8815\n[ 26 ] CVE-2019-8816\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8816\n[ 27 ] CVE-2019-8819\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8819\n[ 28 ] CVE-2019-8820\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8820\n[ 29 ] CVE-2019-8821\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8821\n[ 30 ] CVE-2019-8822\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8822\n[ 31 ] CVE-2019-8823\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8823\n[ 32 ] CVE-2019-8835\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8835\n[ 33 ] CVE-2019-8844\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8844\n[ 34 ] CVE-2019-8846\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8846\n[ 35 ] CVE-2020-3862\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3862\n[ 36 ] CVE-2020-3864\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3864\n[ 37 ] CVE-2020-3865\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3865\n[ 38 ] CVE-2020-3867\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3867\n[ 39 ] CVE-2020-3868\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3868\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202003-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8820"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155065"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      }
    ],
    "trust": 2.52
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-160255",
        "trust": 0.1,
        "type": "unknown"
      },
      {
        "reference": "https://vulmon.com/exploitdetails?qidtp=exploitdb\u0026qid=47590",
        "trust": 0.1,
        "type": "exploit"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-8820",
        "trust": 3.4
      },
      {
        "db": "JVN",
        "id": "JVNVU96749516",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "155112",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775",
        "trust": 0.7
      },
      {
        "db": "EXPLOIT-DB",
        "id": "47590",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "155216",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4013",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4456",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4233",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155069",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-160255",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8820",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "155065",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155065"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "id": "VAR-201912-1847",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T23:01:13.747000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "About the security content of iCloud for Windows 11.0",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210727"
      },
      {
        "title": "About the security content of iCloud for Windows 7.15",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210728"
      },
      {
        "title": "About the security content of iOS 13.2 and iPadOS 13.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210721"
      },
      {
        "title": "About the security content of Xcode 11.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210729"
      },
      {
        "title": "About the security content of macOS Catalina 10.15.1, Security Update 2019-001, and Security Update 2019-006",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210722"
      },
      {
        "title": "About the security content of tvOS 13.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210723"
      },
      {
        "title": "About the security content of watchOS 6.1",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210724"
      },
      {
        "title": "About the security content of Safari 13.0.3",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210725"
      },
      {
        "title": "About the security content of iTunes 12.10.2 for Windows",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210726"
      },
      {
        "title": "Find out which macOS your Mac is using",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT201260"
      },
      {
        "title": "Multiple Apple product WebKit Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=105611"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      },
      {
        "title": "fuzzilli",
        "trust": 0.1,
        "url": "https://github.com/googleprojectzero/fuzzilli "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210721"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210723"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210724"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210725"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210726"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210727"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210728"
      },
      {
        "trust": 1.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 1.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 1.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8750"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8786"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8787"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8747"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8794"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8798"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8775"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8797"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8803"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8785"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8788"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8803"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8815"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8766"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8735"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8789"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8804"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8816"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8775"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8793"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8805"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8710"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8819"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8782"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8794"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8807"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8743"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8820"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8783"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8795"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8811"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8747"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8821"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8784"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8797"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8812"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8750"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8822"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8785"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8798"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8813"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8764"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8823"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8786"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8802"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8814"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8765"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8787"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96749516/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8802"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8788"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8804"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8789"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8805"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8793"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8807"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8784"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8795"
      },
      {
        "trust": 0.7,
        "url": "https://www.exploit-db.com/exploits/47590"
      },
      {
        "trust": 0.7,
        "url": "https://wpewebkit.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.7,
        "url": "https://webkitgtk.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht201222"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20193044-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210725"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210728"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155069/apple-security-advisory-2019-10-29-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159816/red-hat-security-advisory-2020-4451-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155216/webkitgtk-wpe-webkit-code-execution-xss.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4456/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4013/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156742/gentoo-linux-security-advisory-202003-22.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4233/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-30746"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155112/jsc-argument-object-reconstruction-type-confusion.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/googleprojectzero/fuzzilli"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-7152"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://wpewebkit.org/security/."
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8726"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155065"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155065"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "date": "2022-08-09T14:36:05",
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2019-11-01T17:10:20",
        "db": "PACKETSTORM",
        "id": "155065"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2019-11-08T15:45:31",
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2020-03-15T14:00:23",
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "date": "2019-10-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      },
      {
        "date": "2019-11-01T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "date": "2019-12-18T18:15:44.507000",
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160255"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8820"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      },
      {
        "date": "2020-01-07T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "date": "2024-11-21T04:50:32.270000",
        "db": "NVD",
        "id": "CVE-2019-8820"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Vulnerabilities in multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1775"
      }
    ],
    "trust": 0.6
  }
}

VAR-202105-1459

Vulnerability from variot - Updated: 2025-12-22 22:59

A flaw was found in libwebp in versions before 1.0.1. An out-of-bounds read was found in the function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to service availability. An attacker who triggers the out-of-bounds read may obtain information or put the service into a denial-of-service (DoS) state. libwebp is an encoding and decoding library for the WebP image format.


Debian Security Advisory DSA-4930-1                   security@debian.org
https://www.debian.org/security/                       Moritz Muehlenhoff
June 10, 2021                         https://www.debian.org/security/faq


Package        : libwebp
CVE ID         : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013
                 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330
                 CVE-2020-36331 CVE-2020-36332

Multiple vulnerabilities were discovered in libwebp, the implementation of the WebP image format, which could result in denial of service, memory disclosure or potentially the execution of arbitrary code if malformed images are processed.
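The out-of-bounds read in ChunkAssignData belongs to a common class of RIFF-chunk parsing bugs: a chunk's declared size is trusted without checking it against the remaining buffer. The following Python sketch is a hypothetical minimal parser — not libwebp's actual code — showing the bounds check that blocks this class of read:

```python
import struct

def walk_webp_chunks(data: bytes):
    """Yield (tag, payload) for each chunk in a WebP RIFF container,
    refusing any chunk whose declared size runs past the end of the
    buffer -- the kind of check that prevents out-of-bounds reads."""
    if len(data) < 12 or data[:4] != b"RIFF" or data[8:12] != b"WEBP":
        raise ValueError("not a WebP RIFF container")
    pos = 12
    while pos < len(data):
        if pos + 8 > len(data):
            raise ValueError("truncated chunk header")
        tag = data[pos:pos + 4]
        (size,) = struct.unpack_from("<I", data, pos + 4)  # little-endian u32
        payload_start = pos + 8
        if payload_start + size > len(data):  # the crucial bounds check
            raise ValueError(f"chunk {tag!r} overruns buffer")
        yield tag, data[payload_start:payload_start + size]
        pos = payload_start + size + (size & 1)  # chunks are 2-byte aligned
```

A malformed file that claims a huge chunk size inside a tiny buffer is rejected with an error instead of being read past the allocation.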

For the stable distribution (buster), these problems have been fixed in version 0.6.1-2+deb10u1.

We recommend that you upgrade your libwebp packages.
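After upgrading, the installed version can be compared against the fixed version 0.6.1-2+deb10u1. The Python sketch below only approximates dpkg's version ordering (it ignores epochs and the special `~` rule; for an authoritative answer use `dpkg --compare-versions`), and `is_fixed` is a hypothetical helper name:

```python
import re

def split_version(v: str):
    """Split a Debian-style version string into alternating numeric and
    non-numeric runs, so '0.6.1-2+deb10u1' compares piecewise."""
    return [int(p) if p.isdigit() else p for p in re.findall(r"\d+|\D+", v)]

def is_fixed(installed: str, fixed: str = "0.6.1-2+deb10u1") -> bool:
    # Simplified ordering: compare run by run; numeric runs compare as
    # integers, everything else lexically. Good enough for this DSA's
    # version strings, not a full dpkg reimplementation.
    a, b = split_version(installed), split_version(fixed)
    for x, y in zip(a, b):
        if x == y:
            continue
        if isinstance(x, int) and isinstance(y, int):
            return x > y
        return str(x) > str(y)
    return len(a) >= len(b)
```

For example, `is_fixed("0.6.1-2")` is False (the +deb10u1 rebuild is missing), while the fixed version itself and any later upload pass.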

For the detailed security status of libwebp please refer to its security tracker page at: https://security-tracker.debian.org/tracker/libwebp

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

Bugs fixed (https://bugzilla.redhat.com/):

1944888 - CVE-2021-21409 netty: Request smuggling via content-length header
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
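One of the bugs fixed above is CVE-2021-44228 (Log4Shell), which fires when attacker-controlled text containing a `${jndi:...}` lookup reaches a vulnerable Log4j 2.x logger. As a rough triage aid, access logs can be scanned for such lookup strings. The sketch below is a heuristic only — real-world payloads use obfuscation it will miss, and the pattern and helper name are illustrative, not part of any advisory:

```python
import re

# Matches plain and lightly nested JNDI lookup strings such as
# ${jndi:ldap://...} or ${${lower:j}ndi:...}; heavier obfuscation
# can evade this, so treat hits as leads, not proof.
JNDI_PATTERN = re.compile(r"\$\{.{0,30}?j.{0,30}?ndi.{0,30}?:", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return log lines that look like CVE-2021-44228 exploitation attempts."""
    return [line for line in log_lines if JNDI_PATTERN.search(line)]
```

Absence of matches does not prove absence of exploitation; patching remains the only reliable fix.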

JIRA issues fixed (https://issues.jboss.org/):

LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable

Summary:

The Migration Toolkit for Containers (MTC) 1.6.3 is now available.

Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Solution:

For details on how to install and use MTC, refer to:

https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html

Bugs fixed (https://bugzilla.redhat.com/):

2019088 - "MigrationController" CR displays syntax error when unquiescing applications
2021666 - Route name longer than 63 characters causes direct volume migration to fail
2021668 - "MigrationController" CR ignores the "cluster_subdomain" value for direct volume migration routes
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image
2027196 - "migration-controller" pod goes into "CrashLoopBackoff" state if an invalid registry route is entered on the "Clusters" page of the web console
2027382 - "Copy oc describe/oc logs" window does not close automatically after timeout
2028841 - "rsync-client" container fails during direct volume migration with "Address family not supported by protocol" error
2031793 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "includedResources" resource
2039852 - "migration-controller" pod goes into "CrashLoopBackOff" state if "MigPlan" CR contains an invalid "destMigClusterRef" or "srcMigClusterRef"


====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: OpenShift Container Platform 4.11.0 bug fix
                   and security update
Advisory ID:       RHSA-2022:5069-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:5069
Issue date:        2022-08-10
CVE Names:         CVE-2018-25009 CVE-2018-25010 CVE-2018-25012
                   CVE-2018-25013 CVE-2018-25014 CVE-2018-25032
                   CVE-2019-5827 CVE-2019-13750 CVE-2019-13751
                   CVE-2019-17594 CVE-2019-17595 CVE-2019-18218
                   CVE-2019-19603 CVE-2019-20838 CVE-2020-13435
                   CVE-2020-14155 CVE-2020-17541 CVE-2020-19131
                   CVE-2020-24370 CVE-2020-28493 CVE-2020-35492
                   CVE-2020-36330 CVE-2020-36331 CVE-2020-36332
                   CVE-2021-3481 CVE-2021-3580 CVE-2021-3634
                   CVE-2021-3672 CVE-2021-3695 CVE-2021-3696
                   CVE-2021-3697 CVE-2021-3737 CVE-2021-4115
                   CVE-2021-4156 CVE-2021-4189 CVE-2021-20095
                   CVE-2021-20231 CVE-2021-20232 CVE-2021-23177
                   CVE-2021-23566 CVE-2021-23648 CVE-2021-25219
                   CVE-2021-31535 CVE-2021-31566 CVE-2021-36084
                   CVE-2021-36085 CVE-2021-36086 CVE-2021-36087
                   CVE-2021-38185 CVE-2021-38593 CVE-2021-40528
                   CVE-2021-41190 CVE-2021-41617 CVE-2021-42771
                   CVE-2021-43527 CVE-2021-43818 CVE-2021-44225
                   CVE-2021-44906 CVE-2022-0235 CVE-2022-0778
                   CVE-2022-1012 CVE-2022-1215 CVE-2022-1271
                   CVE-2022-1292 CVE-2022-1586 CVE-2022-1621
                   CVE-2022-1629 CVE-2022-1706 CVE-2022-1729
                   CVE-2022-2068 CVE-2022-2097 CVE-2022-21698
                   CVE-2022-22576 CVE-2022-23772 CVE-2022-23773
                   CVE-2022-23806 CVE-2022-24407 CVE-2022-24675
                   CVE-2022-24903 CVE-2022-24921 CVE-2022-25313
                   CVE-2022-25314 CVE-2022-26691 CVE-2022-26945
                   CVE-2022-27191 CVE-2022-27774 CVE-2022-27776
                   CVE-2022-27782 CVE-2022-28327 CVE-2022-28733
                   CVE-2022-28734 CVE-2022-28735 CVE-2022-28736
                   CVE-2022-28737 CVE-2022-29162 CVE-2022-29810
                   CVE-2022-29824 CVE-2022-30321 CVE-2022-30322
                   CVE-2022-30323 CVE-2022-32250
====================================================================

Summary:

Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.11.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:5068

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Security Fix(es):

  • go-getter: command injection vulnerability (CVE-2022-26945)
  • go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
  • go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
  • go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
  • nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
  • sanitize-url: XSS (CVE-2021-23648)
  • minimist: prototype pollution (CVE-2021-44906)
  • node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
  • prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
  • golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
  • go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
  • opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64

The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4

(For aarch64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64

The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x

The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le

The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca

All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
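Before pinning an upgrade to one of the release images above, it can be worth checking that a digest string is well-formed and matches the advisory's published value before passing it to `oc adm release info <repo>@<digest>`. A minimal Python sketch, assuming the x86_64 digest quoted earlier (the `check_digest` helper and the one-entry table are illustrative; the other architectures follow the same pattern):

```python
import re

# Release image digest pinned in this advisory (x86_64 only, for brevity).
PINNED = {
    "x86_64": "sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4",
}

DIGEST_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def check_digest(arch: str, digest: str) -> bool:
    """True only when the digest is syntactically valid (sha256: plus
    64 lowercase hex chars) and equals the pinned value for that arch."""
    return bool(DIGEST_RE.match(digest)) and PINNED.get(arch) == digest
```

This guards against copy-paste truncation of a digest as much as against a substituted image; the authoritative verification remains the signed release metadata that `oc` checks itself.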

Solution:

For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

Bugs fixed (https://bugzilla.redhat.com/):

1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts
1822752 - cluster-version operator stops applying manifests when blocked by a precondition check
1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image
1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV
1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name
1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource
1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group
1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready
1905850 - oc adm policy who-can failed to check the operatorcondition/status resource
1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)
1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource
1918005 - [vsphere] If there are multiple port groups with the same name installation fails
1918417 - IPv6 errors after exiting crictl
1918690 - Should update the KCM resource-graph timely with the latest configure
1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) to become ok"
1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded
1923536 - Image pullthrough does not pass 429 errors back to capable clients
1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API
1932812 - Installer uses the terraform-provider in the Installer's directory if it exists
1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value
1943937 - CatalogSource incorrect parsing validation
1944264 - [ovn] CNO should gracefully terminate OVN databases
1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2
1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled
1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV
1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x
1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
1957668 - oc login does not show link to console
1958198 - authentication operator takes too long to pick up a configuration change
1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true
1961233 - Add CI test coverage for DNS availability during upgrades
1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects
1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata
1965934 - can not get new result with "Refresh off" if click "Run queries" again
1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone.
1968253 - GCP CSI driver can provision volume with access mode ROX
1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones
1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases
1976111 - [tracker] multipathd.socket is missing start conditions
1976782 - Openshift registry starts to segfault after S3 storage configuration
1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory"
1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"]
1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8
1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
1982737 - OLM does not warn on invalid CSV
1983056 - IP conflict while recreating Pod with fixed name
1984785 - LSO CSV does not contain disconnected annotation
1989610 - Unsupported data types should not be rendered on operand details page
1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit
1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager
1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1994117 - Some hardcodes are detected at the code level in orphaned code
1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs
1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods
1996544 - AWS region ap-northeast-3 is missing in installer prompt
1996638 - Helm operator manager container restart when CR is creating&deleting
1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace
1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow
1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc
1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource
1999891 - must-gather collects backup data even when Pods fails to be created
2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap
2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks
2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap
2002868 - Node exporter not able to scrape OVS metrics
2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet
2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO
2006067 - Objects are not valid as a React child
2006201 - ovirt-csi-driver-node pods are crashing intermittently
2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2007340 - Accessibility issues on topology - list view
2007611 - TLS issues with the internal registry and AWS S3 bucket
2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge
2008486 - Double scroll bar shows up on dragging the task quick search to the bottom
2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19
2009352 - Add image-registry usage metrics to telemeter
2009845 - Respect overrides changes during installation
2010361 - OpenShift Alerting Rules Style-Guide Compliance
2010364 - OpenShift Alerting Rules Style-Guide Compliance
2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]
2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS
2011895 - Details about cloud errors are missing from PV/PVC errors
2012111 - LSO still try to find localvolumeset which is already deleted
2012969 - need to figure out why osupdatedstart to reboot is zero seconds
2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)
2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user
2013734 - unable to label downloads route in openshift-console namespace
2013822 - ensure that the container-tools content comes from the RHAOS plashets
2014161 - PipelineRun logs are delayed and stuck on a high log volume
2014240 - Image registry uses ICSPs only when source exactly matches image
2014420 - Topology page is crashed
2014640 - Cannot change storage class of boot disk when cloning from template
2015023 - Operator objects are re-created even after deleting it
2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance
2015356 - Different status shows on VM list page and details page
2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types
2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff
2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value
2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource
2016534 - externalIP does not work when egressIP is also present
2017001 - Topology context menu for Serverless components always open downwards
2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs
2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI
2019532 - Logger object in LSO does not log source location accurately
2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted
2020483 - Parameter $auto_interval_period is in Period drop-down list
2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working
2021041 - [vsphere] Not found TagCategory when destroying ipi cluster
2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible
2022253 - Web terminal view is broken
2022507 - Pods stuck in OutOfpods state after running cluster-density
2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2022745 - Cluster reader is not able to list NodeNetwork objects
2023295 - Must-gather tool gathering data from custom namespaces.
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes 2024427 - oc completion zsh doesn't auto complete 2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" ) 2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block 2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion 2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation 2026356 - [IPI on Azure] The bootstrap machine type should be same as master 2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted 2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page 2027613 - Users can't silence alerts from the dev console 2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition 2028532 - noobaa-pg-db-0 pod stuck in Init:0/2 2028821 - Misspelled label in ODF management UI - MCG performance view 2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf 2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision 2029797 - Uncaught exception: ResizeObserver loop limit exceeded 2029835 - CSI migration for vSphere: Inline-volume tests failing 2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host 2030530 - VM created via customize wizard has single quotation marks surrounding its password 2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled 2030776 - e2e-operator always uses quay master images during presubmit tests 2032559 - CNO allows migration to dual-stack in unsupported configurations 2032717 - Unable to download ignition after 
coreos-installer install --copy-network 2032924 - PVs are not being cleaned up after PVC deletion 2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation 2033575 - monitoring targets are down after the cluster run for more than 1 day 2033711 - IBM VPC operator needs e2e csi tests for ibmcloud 2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address 2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4 2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37 2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save 2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated 2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready 2035005 - MCD is not always removing in progress taint after a successful update 2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks 2035899 - Operator-sdk run bundle doesn't support arm64 env 2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work 2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd 2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF 2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default 2037447 - Ingress Operator is not closing TCP connections. 
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found 2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height 2037610 - typo for the Terminated message from thanos-querier pod description info 2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10 2037625 - AppliedClusterResourceQuotas can not be shown on project overview 2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption 2037628 - Add test id to kms flows for automation 2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster 2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied 2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack 2038115 - Namespace and application bar is not sticky anymore 2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations 2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken 2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group 2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image 2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected 2039253 - ovnkube-node crashes on duplicate endpoints 2039256 - Domain validation fails when TLD contains a digit. 2039277 - Topology list view items are not highlighted on keyboard navigation 2039462 - Application tab in User Preferences dropdown menus are too wide. 
2039477 - validation icon is missing from Import from git 2039589 - The toolbox command always ignores [command] the first time 2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project 2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column 2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names 2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong 2040488 - OpenShift-Ansible BYOH Unit Tests are Broken 2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard 2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits 2040779 - Nodeport svc not accessible when the backend pod is on a window node 2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes 2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted' 2041454 - Garbage values accepted for --reference-policy in oc import-image without any error 2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work 2041769 - Pipeline Metrics page not showing data for normal user 2041774 - Failing git detection should not recommend Devfiles as import strategy 2041814 - The KubeletConfigController wrongly process multiple confs for a pool 2041940 - Namespace pre-population not happening till a Pod is created 2042027 - Incorrect feedback for "oc label pods --all" 2042348 - Volume ID is missing in output message when expanding volume which is not mounted. 
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15 2042501 - use lease for leader election 2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps 2042652 - Unable to deploy hw-event-proxy operator 2042838 - The status of container is not consistent on Container details and pod details page 2042852 - Topology toolbars are unaligned to other toolbars 2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP 2043035 - Wrong error code provided when request contains invalid argument 2043068 - available of text disappears in Utilization item if x is 0 2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist 2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away 2043118 - Host should transition through Preparing when HostFirmwareSettings changed 2043132 - Add a metric when vsphere csi storageclass creation fails 2043314 - oc debug node does not meet compliance requirement 2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining 2043428 - Address Alibaba CSI driver operator review comments 2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release 2043672 - [MAPO] root volumes not working 2044140 - When 'oc adm upgrade --to-image ...' 
rejects an update as not recommended, it should mention --allow-explicit-upgrade 2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method 2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails 2044412 - Topology list misses separator lines and hover effect let the list jump 1px 2044421 - Topology list does not allow selecting an application group anymore 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2044803 - Unify button text style on VM tabs 2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s] 2045065 - Scheduled pod has nodeName changed 2045073 - Bump golang and build images for local-storage-operator 2045087 - Failed to apply sriov policy on intel nics 2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade 2045559 - API_VIP moved when kube-api container on another master node was stopped 2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation 2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt 2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter 2046133 - [MAPO]IPI proxy installation failed 2046156 - Network policy: preview of affected pods for non-admin shows empty popup 2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config 2046191 - Opeartor pod is missing correct qosClass and priorityClass 2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result 
after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource 2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob". 2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow 2046496 - Awkward wrapping of project toolbar on mobile 2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests 2046498 - "All Projects" and "all applications" use different casing on topology page 2046591 - Auto-update boot source is not available while create new template from it 2046594 - "Requested template could not be found" while creating VM from user-created template 2046598 - Auto-update boot source size unit is byte on customize wizard 2046601 - Cannot create VM from template 2046618 - Start last run action should contain current user name in the started-by annotation of the PLR 2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator 2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module 2047257 - [CP MIGRATION] Node drain failure during control plane node migration 2047277 - Storage status is missing from status card of virtualization overview 2047308 - Remove metrics and events for master port offsets 2047310 - Running VMs per template card needs empty state when no VMs exist 2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services 2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2047362 - Removing prometheus UI access breaks origin test 2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure 2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message. 
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error 2047732 - [IBM]Volume is not deleted after destroy cluster 2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource 2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9 2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController 2047895 - Fix architecture naming in oc adm release mirror for aarch64 2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters 2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot 2047935 - [4.11] Bootimage bump tracker 2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin- 2048059 - Service Level Agreement (SLA) always show 'Unknown' 2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false 2048186 - Image registry operator panics when finalizes config deletion 2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud 2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool 2048221 - Capitalization of titles in the VM details page is inconsistent. 
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI. 2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh 2048333 - prometheus-adapter becomes inaccessible during rollout 2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable 2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption 2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy 2048538 - Network policies are not implemented or updated by OVN-Kubernetes 2048541 - incorrect rbac check for install operator quick starts 2048563 - Leader election conventions for cluster topology 2048575 - IP reconciler cron job failing on single node 2048686 - Check MAC address provided on the install-config.yaml file 2048687 - All bare metal jobs are failing now due to End of Life of centos 8 2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr 2048803 - CRI-O seccomp profile out of date 2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class 2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added 2048955 - Alibaba Disk CSI Driver does not have CI 2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured 2049078 - Bond CNI: Failed to attach Bond NAD to pod 2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available' 2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently 2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index 2049142 - Missing "app" label 2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured 2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid 
reference format" 2049410 - external-dns-operator creates provider section, even when not requested 2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs 2049613 - MTU migration on SDN IPv4 causes API alerts 2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist 2049687 - superfluous apirequestcount entries in audit log 2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled 2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs 2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes 2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges 2049889 - oc new-app --search nodejs warns about access to sample content on quay.io 2050005 - Plugin module IDs can clash with console module IDs causing runtime errors 2050011 - Observe > Metrics page: Timespan text input and dropdown do not align 2050120 - Missing metrics in kube-state-metrics 2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension' 2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0 2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2 2050300 - panic in cluster-storage-operator while updating status 2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims 2050335 - azure-disk failed to mount with error special device does not exist 2050345 - alert data for burn budget needs to be updated to prevent regression 2050407 - revert "force cert rotation every couple days for development" in 4.11 2050409 - ip-reconcile job is failing consistently 2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest 2050466 - machine config 
update with invalid container runtime config should be more robust 2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour 2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes 2050707 - up test for prometheus pod look to far in the past 2050767 - Vsphere upi tries to access vsphere during manifests generation phase 2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function 2050882 - Crio appears to be coredumping in some scenarios 2050902 - not all resources created during import have common labels 2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error 2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11 2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted. 2051377 - Unable to switch vfio-pci to netdevice in policy 2051378 - Template wizard is crashed when there are no templates existing 2051423 - migrate loadbalancers from amphora to ovn not working 2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down 2051470 - prometheus: Add validations for relabel configs 2051558 - RoleBinding in project without subject is causing "Project access" page to fail 2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page 2051583 - sriov must-gather image doesn't work 2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line 2051611 - Remove Check which enforces summary_interval must match logSyncInterval 2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release 2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation 2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s 2051722 - MetalLB: BGPPeer object does not have 
ability to set ebgpMultiHop 2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2051954 - Allow changing of policyAuditConfig ratelimit post-deployment 2051969 - Need to build local-storage-operator-metadata-container image for 4.11 2051985 - An APIRequestCount without dots in the name can cause a panic 2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8 2052055 - Whereabouts should implement client-go 1.22+ 2052056 - Static pod installer should throttle creating new revisions 2052071 - local storage operator metrics target down after upgrade 2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1 2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos 2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade 2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters 2052415 - Pod density test causing problems when using kube-burner 2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work. 
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows 2052618 - Node reboot causes duplicate persistent volumes 2052671 - Add Sprint 214 translations 2052674 - Remove extra spaces 2052700 - kube-controller-manger should use configmap lease 2052701 - kube-scheduler should use configmap lease 2052814 - go fmt fails in OSM after migration to go 1.17 2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker 2052953 - Observe dashboard always opens for last viewed workload instead of the selected one 2052956 - Installing virtualization operator duplicates the first action on workloads in topology 2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26 2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds" 2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11 2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15 2053112 - nncp status is unknown when nnce is Progressing 2053118 - nncp Available condition reason should be exposed in oc get 2053168 - Ensure the core dynamic plugin SDK package has correct types and code 2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time 2053304 - Debug terminal no longer works in admin console 2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates 2053334 - rhel worker scaleup playbook failed because missing some dependency of podman 2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down 2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update 2053501 - Git import detection does not happen for private repositories 
2053582 - inability to detect static lifecycle failure 2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization 2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated 2053622 - PDB warning alert when CR replica count is set to zero 2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes) 2053721 - When using RootDeviceHint rotational setting the host can fail to provision 2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids 2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition 2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet 2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer 2054238 - console-master-e2e-gcp-console is broken 2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial] 2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal 2054319 - must-gather | gather_metallb_logs can't detect metallb pod 2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work 2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13 2054564 - DPU network operator 4.10 branch need to sync with master 2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page 2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4 2054701 - [MAPO] Events are not created for MAPO machines 2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already 
freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with "            error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5:  Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1;
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts. 
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated. 
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs". 
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. 
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 Missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesnt work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - oc debug node/nodename -- chroot /host somecommand should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema:  ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. 
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused 'apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int' when AllRequestBodies audit-profile is used. 
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node. 
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - OperatorHub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be run in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar. 
2079965 - [rebase v1.24]  [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24]  [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24]  [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24]  [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - oc explain output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.postStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time. 
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... interface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external services URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. 
2084438 - Change Ping source spec.jsonData (deprecated) field  to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for  externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by oc image mirror command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - --keep-manifest-list=true does not work for oc adm release new , only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings would violate PodSecurity "restricted:v1.24"
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13  crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN as a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass  invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interfaces in the same NIC according to the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates checkAccess and useAccessReview doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on cluster destroy
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shouldn't be there
2093597 - Import: Advanced option sentence is split into two parts and headlines have no padding
2093600 - Project access tab should apply new permissions before it delete old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install  totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will fail.
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic  | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keep restarting when handling IPs with leading zeros
2094806 - Machine API oVirt component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fails after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and on click ui breaks
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode
2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6
2096315 - NodeClockNotSynchronising alert's severity should be critical
2096350 - Web console doesn't display webhook errors for upgrades
2096352 - Collect whole journal in gather
2096380 - acm-simple-kmod references deprecated KVC example
2096392 - Topology node icons are not properly visible in Dark mode
2096394 - Add page Card items background color does not match with column background color in Dark mode
2096413 - br-ex not created due to default bond interface having a different mac address than expected
2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile
2096605 - [vsphere] no validation checking for diskType
2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups
2096855 - oc adm release new failed with error when use  an existing  multi-arch release image as input
2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider
2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import
2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology
2097043 - No clean way to specify operand issues to KEDA OLM operator
2097047 - MetalLB:  matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries
2097067 - ClusterVersion history pruner does not always retain initial completed update entry
2097153 - poor performance on API call to vCenter ListTags with thousands of tags
2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects
2097239 - Change Lower CPU limits for Power VS cloud
2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support
2097260 - openshift-install create manifests failed for Power VS platform
2097276 - MetalLB CI deploys the operator via manifests and not using the csv
2097282 - chore: update external-provisioner to the latest upstream release
2097283 - chore: update external-snapshotter to the latest upstream release
2097284 - chore: update external-attacher to the latest upstream release
2097286 - chore: update node-driver-registrar to the latest upstream release
2097334 - oc plugin help shows 'kubectl'
2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11
2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook
2097454 - Placeholder bug for OCP 4.11.0 metadata release
2097503 - chore: rebase against latest external-resizer
2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading
2097607 - Add Power VS support to Webhooks tests in actuator e2e test
2097685 - Ironic-agent can't restart because of existing container
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1
2097810 - Required Network tools missing for Testing e2e PTP
2097832 - clean up unused IPv6DualStackNoUpgrade feature gate
2097940 - openshift-install destroy cluster traps if vpcRegion not specified
2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing
2098172 - oc-mirror does not validate the registry in the storage config
2098175 - invalid license in python-dataclasses-0.8-2.el8 spec
2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file
2098242 - typo in SRO specialresourcemodule
2098243 - Add error check to Platform create for Power VS
2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device
2098508 - Control-plane-machine-set-operator report panic
2098610 - No need to check the push permission with ?manifests-only option
2099293 - oVirt cluster API provider should use latest go-ovirt-client
2099330 - Edit application grouping is shown to user with view only access in a cluster
2099340 - CAPI e2e tests for AWS are missing
2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump
2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups
2099528 - Layout issue: No spacing in delete modals
2099561 - Prometheus returns HTTP 500 error on /favicon.ico
2099582 - Format and update Repository overview content
2099611 - Failures on etcd-operator watch channels
2099637 - Should print error when use --keep-manifest-list=false for manifestlist image
2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)
2099668 - KubeControllerManager should degrade when GC stops working
2099695 - Update CAPG after rebase
2099751 - specialresourcemodule stacktrace while looping over build status
2099755 - EgressIP node's mgmtIP reachability configuration option
2099763 - Update icons for event sources and sinks in topology, Add page, and context menu
2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]
2099821 - exporting a pointer for the loop variable
2099875 - The speaker won't start if there's another component on the host listening on 8080
2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing
2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file
2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster
2100001 - Sync upstream v1.22.0 downstream
2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator
2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment
2100038 - failure to update special-resource-lifecycle table during update Event
2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump
2100138 - release info --bugs has no differentiator between Jira and Bugzilla
2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation
2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar
2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied"
2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile
2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8
2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running
2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field
2100507 - Remove redundant log lines from obj_retry.go
2100536 - Update API to allow EgressIP node reachability check
2100601 - Update CNO to allow EgressIP node reachability check
2100643 - [Migration] [GCP]OVN can not rollback to SDN
2100644 - openshift-ansible FTBFS on RHEL8
2100669 - Telemetry should not log the full path if it contains a username
2100749 - [OCP 4.11] multipath support needs multipath modules
2100825 - Update machine-api-powervs go modules to latest version
2100841 - tiny openshift-install usability fix for setting KUBECONFIG
2101460 - An etcd member for a new machine was never added to the cluster
2101498 - Revert Bug 2082599: add upper bound to number of failed attempts
2102086 - The base image is still 4.10 for operator-sdk 1.22
2102302 - Dummy bug for 4.10 backports
2102362 - Valid regions should be allowed in GCP install config
2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster
2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption
2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install
2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root
2102947 - [VPA] recommender is logging errors for pods with init containers
2103053 - [4.11] Backport Prow CI improvements from master
2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly
2103080 - br-ex not created due to default bond interface having a different mac address than expected
2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces
2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path'
2103749 - MachineConfigPool is not getting updated
2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec
2104432 - [dpu-network-operator] Updating images to be consistent with ART
2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack
2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0
2104589 - must-gather namespace should have "privileged" warn and audit pod security labels besides enforce
2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes
2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference"
2104727 - Bootstrap node should honor http proxy
2104906 - Uninstall fails with Observed a panic: runtime.boundsError
2104951 - Web console doesn't display webhook errors for upgrades
2104991 - Completed pods may not be correctly cleaned up
2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds
2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied
2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history
2105167 - BuildConfig throws error when using a label with a / in it
2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial
2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator
2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. 
2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18
2106051 - Unable to deploy acm-ice using latest SRO 4.11 build
2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]
2106062 - [4.11] Bootimage bump tracker
2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc"
2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls
2106313 - bond-cni: backport bond-cni GA items to 4.11
2106543 - Typo in must-gather release-4.10
2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI
2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. rpm-ostree status shows No space left on device
2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted
2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing
2107501 - metallb greenwave tests failure
2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found"
2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade
2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference
2108686 - rpm-ostreed: start limit hit easily
2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate
2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations
2111055 - dummy bug for 4.10.z bz2110938

  1. References:

https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/cve/CVE-2021-43818
https://access.redhat.com/security/cve/CVE-2021-44225
https://access.redhat.com/security/cve/CVE-2021-44906
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1215
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1621
https://access.redhat.com/security/cve/CVE-2022-1629
https://access.redhat.com/security/cve/CVE-2022-1706
https://access.redhat.com/security/cve/CVE-2022-1729
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-26691
https://access.redhat.com/security/cve/CVE-2022-26945
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-28733
https://access.redhat.com/security/cve/CVE-2022-28734
https://access.redhat.com/security/cve/CVE-2022-28735
https://access.redhat.com/security/cve/CVE-2022-28736
https://access.redhat.com/security/cve/CVE-2022-28737
https://access.redhat.com/security/cve/CVE-2022-29162
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-30321
https://access.redhat.com/security/cve/CVE-2022-30322
https://access.redhat.com/security/cve/CVE-2022-30323
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

Summary:

An update is now available for OpenShift Logging 5.3.

Description:

Openshift Logging Bug Fix Release (5.3.0)

Security Fix(es):

  • golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bugs fixed (https://bugzilla.redhat.com/):

1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment

  1. JIRA issues fixed (https://issues.jboss.org/):

LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1459",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "ontap select deploy administration utility",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "enterprise linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "8.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.7"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.7"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "9.0"
      },
      {
        "model": "libwebp",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "webmproject",
        "version": "1.0.1"
      },
      {
        "model": "libwebp",
        "scope": null,
        "trust": 0.8,
        "vendor": "the webm",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "ontap select deploy administration utility",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "red hat enterprise linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30ec\u30c3\u30c9\u30cf\u30c3\u30c8",
        "version": null
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "165296"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2020-36331",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.4,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-36331",
            "impactScore": 4.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.4,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "VHN-391910",
            "impactScore": 4.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.1,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-36331",
            "impactScore": 5.2,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 9.1,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2020-36331",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-36331",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-36331",
            "trust": 0.8,
            "value": "Critical"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202105-1382",
            "trust": 0.6,
            "value": "CRITICAL"
          },
          {
            "author": "VULHUB",
            "id": "VHN-391910",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-36331",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A flaw was found in libwebp in versions before 1.0.1. An out-of-bounds read was found in function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to the service availability. libwebp Is vulnerable to an out-of-bounds read.Information is obtained and denial of service (DoS) It may be put into a state. libwebp is an encoding and decoding library for the WebP image format. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-4930-1                   security@debian.org\nhttps://www.debian.org/security/                       Moritz Muehlenhoff\nJune 10, 2021                         https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : libwebp\nCVE ID         : CVE-2018-25009 CVE-2018-25010 CVE-2018-25011 CVE-2018-25013 \n                 CVE-2018-25014 CVE-2020-36328 CVE-2020-36329 CVE-2020-36330 \n                 CVE-2020-36331 CVE-2020-36332\n\nMultiple vulnerabilities were discovered in libwebp, the implementation\nof the WebP image format, which could result in denial of service, memory\ndisclosure or potentially the execution of arbitrary code if malformed\nimages are processed. \n\nFor the stable distribution (buster), these problems have been fixed in\nversion 0.6.1-2+deb10u1. \n\nWe recommend that you upgrade your libwebp packages. 
\n\nFor the detailed security status of libwebp please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/libwebp\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmDCfg0ACgkQEMKTtsN8\nTjaaKBAAqMJfe5aH4Gh14SpB7h2S5JJUK+eo/aPo1tXn7BoLiF4O5g05+McyUOdE\nHI9ibolUfv+HoZlCDC93MBJvopWgd1/oqReHML5n2GXPBESYXpRstL04qwaRqu9g\nAvofhX88EwHefTXmljVTL4W1KgMJuhhPxVLdimxoqd0/hjagZtA7B7R05khigC5k\nnHMFoRogSPjI9H4vI2raYaOqC26zmrZNbk/CRVhuUbtDOG9qy9okjc+6KM9RcbXC\nha++EhrGXPjCg5SwrQAZ50nW3Jwif2WpSeULfTrqHr2E8nHGUCHDMMtdDwegFH/X\nFK0dVaNPgrayw1Dji+fhBQz3qR7pl/1DK+gsLtREafxY0+AxZ57kCi51CykT/dLs\neC4bOPaoho91KuLFrT+X/AyAASS/00VuroFJB4sWQUvEpBCnWPUW1m3NvjsyoYuj\n0wmQMVM5Bb/aYuWAM+/V9MeoklmtIn+OPAXqsVvLxdbB0GScwJV86/NvsN6Nde6c\ntwImfMCK1V75FPrIsxx37M52AYWvALgXbWoVi4aQPyPeDerQdgUPL1FzTGzem0NQ\nPnXhuE27H/pJz79DosW8md0RFr+tfPgZ8CeTirXSUUXFiqhcXR/w1lqN2vlmfm8V\ndmwgzvu9A7ZhG++JRqbbMx2D+NS4coGgRdA7XPuRrdNKniRIDhQ=\n=pN/j\n-----END PGP SIGNATURE-----\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.3 is now available. 
Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2019088 - \"MigrationController\" CR displays syntax error when unquiescing applications\n2021666 - Route name longer than 63 characters causes direct volume migration to fail\n2021668 - \"MigrationController\" CR ignores the \"cluster_subdomain\" value for direct volume migration routes\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n2024966 - Manifests not used by Operator Lifecycle Manager must be removed from the MTC 1.6 Operator image\n2027196 - \"migration-controller\" pod goes into \"CrashLoopBackoff\" state if an invalid registry route is entered on the \"Clusters\" page of the web console\n2027382 - \"Copy oc describe/oc logs\" window does not close automatically after timeout\n2028841 - \"rsync-client\" container fails during direct volume migration with \"Address family not supported by protocol\" error\n2031793 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"includedResources\" resource\n2039852 - \"migration-controller\" pod goes into \"CrashLoopBackOff\" state if \"MigPlan\" CR contains an invalid \"destMigClusterRef\" or \"srcMigClusterRef\"\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: OpenShift Container Platform 4.11.0 bug fix and security update\nAdvisory ID:       RHSA-2022:5069-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:5069\nIssue date:        2022-08-10\nCVE Names:         CVE-2018-25009 CVE-2018-25010 CVE-2018-25012\n                   CVE-2018-25013 CVE-2018-25014 CVE-2018-25032\n                   CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n                   CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n                   CVE-2019-19603 CVE-2019-20838 CVE-2020-13435\n                   CVE-2020-14155 CVE-2020-17541 CVE-2020-19131\n                   CVE-2020-24370 CVE-2020-28493 CVE-2020-35492\n                   CVE-2020-36330 CVE-2020-36331 CVE-2020-36332\n                   CVE-2021-3481 CVE-2021-3580 CVE-2021-3634\n                   CVE-2021-3672 CVE-2021-3695 CVE-2021-3696\n                   CVE-2021-3697 CVE-2021-3737 CVE-2021-4115\n                   CVE-2021-4156 CVE-2021-4189 CVE-2021-20095\n                   CVE-2021-20231 CVE-2021-20232 CVE-2021-23177\n                   CVE-2021-23566 CVE-2021-23648 CVE-2021-25219\n                   CVE-2021-31535 CVE-2021-31566 CVE-2021-36084\n                   CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n                   CVE-2021-38185 CVE-2021-38593 CVE-2021-40528\n                   CVE-2021-41190 CVE-2021-41617 CVE-2021-42771\n                   CVE-2021-43527 CVE-2021-43818 CVE-2021-44225\n                   CVE-2021-44906 CVE-2022-0235 CVE-2022-0778\n                   CVE-2022-1012 CVE-2022-1215 CVE-2022-1271\n                   CVE-2022-1292 CVE-2022-1586 CVE-2022-1621\n                   CVE-2022-1629 CVE-2022-1706 CVE-2022-1729\n                   CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n              
     CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n                   CVE-2022-23806 CVE-2022-24407 CVE-2022-24675\n                   CVE-2022-24903 CVE-2022-24921 CVE-2022-25313\n                   CVE-2022-25314 CVE-2022-26691 CVE-2022-26945\n                   CVE-2022-27191 CVE-2022-27774 CVE-2022-27776\n                   CVE-2022-27782 CVE-2022-28327 CVE-2022-28733\n                   CVE-2022-28734 CVE-2022-28735 CVE-2022-28736\n                   CVE-2022-28737 CVE-2022-29162 CVE-2022-29810\n                   CVE-2022-29824 CVE-2022-30321 CVE-2022-30322\n                   CVE-2022-30323 CVE-2022-32250\n====================================================================\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-x86_64\n\nThe image digest is\nsha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-aarch64\n\nThe image digest is\nsha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-s390x\n\nThe image digest is\nsha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le\n\nThe image digest is\nsha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1817075 - MCC \u0026 MCO don\u0027t free leader leases during shut down -\u003e 10 minutes of leader election timeouts\n1822752 - cluster-version operator stops applying manifests when blocked by a precondition check\n1823143 - oc adm release extract --command, --tools doesn\u0027t pull from localregistry when given a localregistry/image\n1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV\n1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name\n1896181 - [ovirt] install fails: due to terraform error \"Cannot run VM. VM is being updated\" on vm resource\n1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group\n1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready\n1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource\n1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)\n1917898 - [ovirt] install fails: due to terraform error \"Tag not matched: expect \u003cfault\u003e but got \u003chtml\u003e\" on vm resource\n1918005 - [vsphere] If there are multiple port groups with the same name installation fails\n1918417 - IPv6 errors after exiting crictl\n1918690 - Should update the KCM resource-graph timely with the latest configure\n1919980 - oVirt installer fails due to terraform error \"Failed to wait for Templte(...) 
to become ok\"\n1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded\n1923536 - Image pullthrough does not pass 429 errors back to capable clients\n1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API\n1932812 - Installer uses the terraform-provider in the Installer\u0027s directory if it exists\n1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value\n1943937 - CatalogSource incorrect parsing validation\n1944264 - [ovn] CNO should gracefully terminate OVN databases\n1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2\n1945329 - In k8s 1.21 bump conntrack \u0027should drop INVALID conntrack entries\u0027 tests are disabled\n1948556 - Cannot read property \u0027apiGroup\u0027 of undefined error viewing operator CSV\n1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x\n1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap\n1957668 - oc login does not show link to console\n1958198 - authentication operator takes too long to pick up a configuration change\n1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true\n1961233 - Add CI test coverage for DNS availability during upgrades\n1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects\n1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata\n1965934 - can not get new result with \"Refresh off\" if click \"Run queries\" again\n1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone. 
\n1968253 - GCP CSI driver can provision volume with access mode ROX\n1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones\n1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases\n1976111 - [tracker] multipathd.socket is missing start conditions\n1976782 - Openshift registry starts to segfault after S3 storage configuration\n1977100 - Pod failed to start with message \"set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory\"\n1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\\\"Approved\\\"]\n1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7-\u003e4.8\n1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n1982737 - OLM does not warn on invalid CSV\n1983056 - IP conflict while recreating Pod with fixed name\n1984785 - LSO CSV does not contain disconnected annotation\n1989610 - Unsupported data types should not be rendered on operand details page\n1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n1990384 - 502 error on \"Observe -\u003e Alerting\" UI after disabled local alertmanager\n1992553 - all the alert rules\u0027 annotations \"summary\" and \"description\" should comply with the OpenShift alerting guidelines\n1994117 - Some hardcodes are detected at the code level in orphaned code\n1994820 - machine controller doesn\u0027t send vCPU quota failed messages to cluster install logs\n1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods\n1996544 - AWS region ap-northeast-3 is missing in installer prompt\n1996638 - Helm operator manager container 
restart when CR is creating\u0026deleting\n1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace\n1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow\n1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc\n1999325 - FailedMount MountVolume.SetUp failed for volume \"kube-api-access\" : object \"openshift-kube-scheduler\"/\"kube-root-ca.crt\" not registered\n1999529 - Must gather fails to gather logs for all the namespace if server doesn\u0027t have volumesnapshotclasses resource\n1999891 - must-gather collects backup data even when Pods fails to be created\n2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap\n2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks\n2002602 - Storageclass creation page goes blank when \"Enable encryption\" is clicked if there is a syntax error in the configmap\n2002868 - Node exporter not able to scrape OVS metrics\n2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet\n2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO\n2006067 - Objects are not valid as a React child\n2006201 - ovirt-csi-driver-node pods are crashing intermittently\n2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment\n2007340 - Accessibility issues on topology - list view\n2007611 - TLS issues with the internal registry and AWS S3 bucket\n2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge\n2008486 - Double scroll bar shows up on dragging the task quick search to the bottom\n2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19\n2009352 - Add image-registry usage metrics to telemeter\n2009845 - Respect overrides changes during 
installation\n2010361 - OpenShift Alerting Rules Style-Guide Compliance\n2010364 - OpenShift Alerting Rules Style-Guide Compliance\n2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]\n2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS\n2011895 - Details about cloud errors are missing from PV/PVC errors\n2012111 - LSO still try to find localvolumeset which is already deleted\n2012969 - need to figure out why osupdatedstart to reboot is zero seconds\n2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)\n2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user\n2013734 - unable to label downloads route in openshift-console namespace\n2013822 - ensure that the `container-tools` content comes from the RHAOS plashets\n2014161 - PipelineRun logs are delayed and stuck on a high log volume\n2014240 - Image registry uses ICSPs only when source exactly matches image\n2014420 - Topology page is crashed\n2014640 - Cannot change storage class of boot disk when cloning from template\n2015023 - Operator objects are re-created even after deleting it\n2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance\n2015356 - Different status shows on VM list page and details page\n2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types\n2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff\n2015800 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource\n2016534 - externalIP does not work when egressIP is also present\n2017001 - Topology 
context menu for Serverless components always open downwards\n2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs\n2018517 - [sig-arch] events should not repeat pathologically expand_less failures -  s390x CI\n2019532 - Logger object in LSO does not log source location accurately\n2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted\n2020483 - Parameter $__auto_interval_period is in Period drop-down list\n2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working\n2021041 - [vsphere] Not found TagCategory when destroying ipi cluster\n2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible\n2022253 - Web terminal view is broken\n2022507 - Pods stuck in OutOfpods state after running cluster-density\n2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2022745 - Cluster reader is not able to list NodeNetwork* objects\n2023295 - Must-gather tool gathering data from custom namespaces. 
\n2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes\n2024427 - oc completion zsh doesn\u0027t auto complete\n2024708 - The form for creating operational CRs is badly rendering filed names (\"obsoleteCPUs\" -\u003e \"Obsolete CP Us\" )\n2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation\n2026356 - [IPI on Azure] The bootstrap machine type should be same as master\n2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted\n2027603 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2027613 - Users can\u0027t silence alerts from the dev console\n2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition\n2028532 - noobaa-pg-db-0 pod stuck in Init:0/2\n2028821 - Misspelled label in ODF management UI - MCG performance view\n2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf\n2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node\u0027s not achieving new revision\n2029797 - Uncaught exception: ResizeObserver loop limit exceeded\n2029835 - CSI migration for vSphere: Inline-volume tests failing\n2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host\n2030530 - VM created via customize wizard has single quotation marks surrounding its password\n2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled\n2030776 - e2e-operator always uses quay master images during presubmit tests\n2032559 - CNO allows migration to dual-stack in unsupported 
configurations\n2032717 - Unable to download ignition after coreos-installer install --copy-network\n2032924 - PVs are not being cleaned up after PVC deletion\n2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation\n2033575 - monitoring targets are down after the cluster run for more than 1 day\n2033711 - IBM VPC operator needs e2e csi tests for ibmcloud\n2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address\n2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4\n2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37\n2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save\n2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn\u0027t authenticated\n2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready\n2035005 - MCD is not always removing in progress taint after a successful update\n2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks\n2035899 - Operator-sdk run bundle doesn\u0027t support arm64 env\n2036202 - Bump podman to \u003e= 3.3.0 so that  setup of multiple credentials for a single registry which can be distinguished by their path  will work\n2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd\n2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF\n2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default\n2037447 - Ingress Operator is not closing TCP connections. 
\n2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found\n2037542 - Pipeline Builder footer is not sticky and yaml tab doesn\u0027t use full height\n2037610 - typo for the Terminated message from thanos-querier pod description info\n2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10\n2037625 - AppliedClusterResourceQuotas can not be shown on project overview\n2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption\n2037628 - Add test id to kms flows for automation\n2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster\n2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied\n2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack\n2038115 - Namespace and application bar is not sticky anymore\n2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations\n2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken\n2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and  ESP protocol not in security group\n2039135 - the error message is not clear when using \"opm index prune\" to prune a file-based index image\n2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2039253 - ovnkube-node crashes on duplicate endpoints\n2039256 - Domain validation fails when TLD contains a digit. \n2039277 - Topology list view items are not highlighted on keyboard navigation\n2039462 - Application tab in User Preferences dropdown menus are too wide. 
\n2039477 - validation icon is missing from Import from git\n2039589 - The toolbox command always ignores [command] the first time\n2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project\n2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column\n2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names\n2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong\n2040488 - OpenShift-Ansible BYOH Unit Tests are Broken\n2040635 - CPU Utilisation is negative number for \"Kubernetes / Compute Resources / Cluster\" dashboard\n2040654 - \u0027oc adm must-gather -- some_script\u0027 should exit with same non-zero code as the failed \u0027some_script\u0027 exits\n2040779 - Nodeport svc not accessible when the backend pod is on a window node\n2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes\n2041133 - \u0027oc explain route.status.ingress.conditions\u0027 shows type \u0027Currently only Ready\u0027 but actually is \u0027Admitted\u0027\n2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error\n2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can\u0027t work\n2041769 - Pipeline Metrics page not showing data for normal user\n2041774 - Failing git detection should not recommend Devfiles as import strategy\n2041814 - The KubeletConfigController wrongly process multiple confs for a pool\n2041940 - Namespace pre-population not happening till a Pod is created\n2042027 - Incorrect feedback for \"oc label pods --all\"\n2042348 - Volume ID is missing in output message when expanding volume which is not mounted. 
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - <x> available of <y> text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - `oc debug node` does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type *v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in `oc get`
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operator index image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy *.app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_* metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing "br-sao" region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the `oc-mirror version` is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing
state apart from Watchdog and AlertmanagerReceiversNotConfigured\n2070160 - Copy-to-clipboard and \u003cpre\u003e elements cause display issues for ACM dynamic plugins\n2070172 - SRO uses the chart\u0027s name as Helm release, not the SpecialResource\u0027s\n2070181 - [MAPO] serverGroupName ignored\n2070457 - Image vulnerability Popover overflows from the visible area\n2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes\n2070703 - some ipv6 network policy tests consistently failing\n2070720 - [UI] Filter reset doesn\u0027t work on Pods/Secrets/etc pages and complete list disappears\n2070731 - details switch label is not clickable on add page\n2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled\n2070792 - service \"openshift-marketplace/marketplace-operator-metrics\" is not annotated with capability\n2070805 - ClusterVersion: could not download the update\n2070854 - cv.status.capabilities.enabledCapabilities doesn?t show the day-2 enabled caps when there are errors on resources update\n2070887 - Cv condition ImplicitlyEnabledCapabilities doesn?t complain about the disabled capabilities which is previously enabled\n2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci\n2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes\n2071019 - rebase vsphere csi driver 2.5\n2071021 - vsphere driver has snapshot support missing\n2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong\n2071139 - Ingress pods scheduled on the same node\n2071364 - All image building tests are broken with \"            error: build error: attempting to convert BUILD_LOGLEVEL env var value \"\" to integer: strconv.Atoi: parsing \"\": invalid syntax\n2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)\n2071599 - RoleBidings are not getting updated 
for ClusterRole in OpenShift Web Console\n2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType\n2071617 - remove Kubevirt extensions in favour of dynamic plugin\n2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO\n2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs\n2071700 - v1 events show \"Generated from\" message without the source/reporting component\n2071715 - Shows 404 on Environment nav in Developer console\n2071719 - OCP Console global PatternFly overrides link button whitespace\n2071747 - Link to documentation from the overview page goes to a missing link\n2071761 - Translation Keys Are Not Namespaced\n2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable\n2071859 - ovn-kube pods spec.dnsPolicy should be Default\n2071914 - cloud-network-config-controller 4.10.5:  Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name \"\"\n2071998 - Cluster-version operator should share details of signature verification when it fails in \u0027Force: true\u0027 updates\n2072106 - cluster-ingress-operator tests do not build on go 1.18\n2072134 - Routes are not accessible within cluster from hostnet pods\n2072139 - vsphere driver has permissions to create/update PV objects\n2072154 - Secondary Scheduler operator panics\n2072171 - Test \"[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]\" fails\n2072195 - machine api doesn\u0027t issue client cert when AWS DNS suffix missing\n2072215 - Whereabouts ip-reconciler should be opt-in and not required\n2072389 - CVO exits upgrade immediately rather than waiting for etcd backup\n2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes\n2072455 - make bundle overwrites 
supported-nic-ids_v1_configmap.yaml\n2072570 - The namespace titles for operator-install-single-namespace test keep changing\n2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)\n2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master\n2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node\n2072793 - Drop \"Used Filesystem\" from \"Virtualization -\u003e Overview\"\n2072805 - Observe \u003e Dashboards: $__range variables cause PromQL query errors\n2072807 - Observe \u003e Dashboards: Missing `panel.styles` attribute for table panels causes JS error\n2072842 - (release-4.11) Gather namespace names with overlapping UID ranges\n2072883 - sometimes monitoring dashboards charts can not be loaded successfully\n2072891 - Update gcp-pd-csi-driver to 1.5.1;\n2072911 - panic observed in kubedescheduler operator\n2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial\n2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system\n2072998 - update aws-efs-csi-driver to the latest version\n2072999 - Navigate from logs of selected Tekton task instead of last one\n2073021 - [vsphere] Failed to update OS on master nodes\n2073112 - Prometheus (uwm) externalLabels not showing always in alerts. \n2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to \"${HOME}/.docker/config.json\" is deprecated. \n2073176 - removing data in form does not remove data from yaml editor\n2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists\n2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it\u0027s \"PipelineRuns\" and on Repository Details page it\u0027s \"Pipeline Runs\". 
\n2073373 - Update azure-disk-csi-driver to 1.16.0\n2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig\n2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning\n2073436 - Update azure-file-csi-driver to v1.14.0\n2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls\n2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)\n2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. \n2073522 - Update ibm-vpc-block-csi-driver to v4.2.0\n2073525 - Update vpc-node-label-updater to v4.1.2\n2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled\n2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW\n2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses\n2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies\n2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring\n2074009 - [OVN] ovn-northd doesn\u0027t clean Chassis_Private record after scale down to 0 a machineSet\n2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary\n2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn\u0027t work well\n2074084 - CMO metrics not visible in the OCP webconsole UI\n2074100 - CRD filtering according to name broken\n2074210 - asia-south2, australia-southeast2, and southamerica-west1Missing from GCP regions\n2074237 - oc new-app --image-stream flag behavior is unclear\n2074243 - DefaultPlacement API allow empty enum value and remove 
default\n2074447 - cluster-dashboard: CPU Utilisation iowait and steal\n2074465 - PipelineRun fails in import from Git flow if \"main\" branch is default\n2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled\n2074475 - [e2e][automation] kubevirt plugin cypress tests fail\n2074483 - coreos-installer doesnt work on Dell machines\n2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes\n2074585 - MCG standalone deployment page goes blank when the KMS option is enabled\n2074606 - occm does not have permissions to annotate SVC objects\n2074612 - Operator fails to install due to service name lookup failure\n2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system\n2074635 - Unable to start Web Terminal after deleting existing instance\n2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records\n2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver\n2074710 - Transition to go-ovirt-client\n2074756 - Namespace column provide wrong data in ClusterRole Details -\u003e Rolebindings tab\n2074767 - Metrics page show incorrect values due to metrics level config\n2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in\n2074902 - `oc debug node/nodename ? 
chroot /host somecommand` should exit with non-zero when the sub-command failed\n2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)\n2075024 - Metal upgrades permafailing on metal3 containers crash looping\n2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP\n2075091 - Symptom Detection.Undiagnosed panic detected in pod\n2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)\n2075149 - Trigger Translations When Extensions Are Updated\n2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors\n2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured\n2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn\u0027t work\n2075478 - Bump documentationBaseURL to 4.11\n2075491 - nmstate operator cannot be upgraded on SNO\n2075575 - Local Dev Env - Prometheus 404 Call errors spam the console\n2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled\n2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow\n2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade\n2075647 - \u0027oc adm upgrade ...\u0027 POSTs ClusterVersion, clobbering any unrecognized spec properties\n2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects\n2075778 - Fix failing TestGetRegistrySamples test\n2075873 - Bump recommended FCOS to 35.20220327.3.0\n2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn\u0027t take effect\n2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs\n2076277 - [RFE] [OCPonRHV] Add storage domain ID valueto Compute/ControlPlain section in the machine object\n2076290 - PTP operator readme 
missing documentation on BC setup via PTP config\n2076297 - Router process ignores shutdown signal while starting up\n2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable\n2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap\n2076393 - [VSphere] survey fails to list datacenters\n2076521 - Nodes in the same zone are not updated in the right order\n2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types \u0027too fast\u0027\n2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10\n2076553 - Project access view replace group ref with user ref when updating their Role\n2076614 - Missing Events component from the SDK API\n2076637 - Configure metrics for vsphere driver to be reported\n2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters\n2076793 - CVO exits upgrade immediately rather than waiting for etcd backup\n2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours\n2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26\n2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it\n2076975 - Metric unset during static route conversion in configure-ovs.sh\n2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI\n2077050 - OCP should default to pd-ssd disk type on GCP\n2077150 - Breadcrumbs on a few screens don\u0027t have correct top margin spacing\n2077160 - Update owners for openshift/cluster-etcd-operator\n2077357 - [release-4.11] 200ms packet delay with OVN controller turn on\n2077373 - Accessibility warning on developer perspective\n2077386 - Import page shows untranslated values for the route advanced routing\u003esecurity options (devconsole~Edge)\n2077457 - 
failure in test case \"[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager\"\n2077497 - Rebase etcd to 3.5.3 or later\n2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API\n2077599 - OCP should alert users if they are on vsphere version \u003c7.0.2\n2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster\n2077797 - LSO pods don\u0027t have any resource requests\n2077851 - \"make vendor\" target is not working\n2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn\u0027t replaced, but a random port gets replaced and 8080 still stays\n2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region\n2078013 - drop multipathd.socket workaround\n2078375 - When using the wizard with template using data source the resulting vm use pvc source\n2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label\n2078431 - [OCPonRHV] - ERROR failed to instantiate provider \"openshift/local/ovirt\" to obtain schema:  ERROR fork/exec\n2078526 - Multicast breaks after master node reboot/sync\n2078573 - SDN CNI -Fail to create nncp when vxlan is up\n2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. \n2078698 - search box may not completely remove content\n2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)\n2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused ?apiserver panic\u0027d...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int? when AllRequestBodies audit-profile is used. 
\n2078781 - PreflightValidation does not handle multiarch images\n2078866 - [BM][IPI] Installation with bonds fail - DaemonSet \"openshift-ovn-kubernetes/ovnkube-node\" rollout is not making progress\n2078875 - OpenShift Installer fail to remove Neutron ports\n2078895 - [OCPonRHV]-\"cow\" unsupported value in format field in install-config.yaml\n2078910 - CNO spitting out \".spec.groups[0].rules[4].runbook_url: field not declared in schema\"\n2078945 - Ensure only one apiserver-watcher process is active on a node. \n2078954 - network-metrics-daemon makes costly global pod list calls scaling per node\n2078969 - Avoid update races between old and new NTO operands during cluster upgrades\n2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned\n2079062 - Test for console demo plugin toast notification needs to be increased for ci testing\n2079197 - [RFE] alert when more than one default storage class is detected\n2079216 - Partial cluster update reference doc link returns 404\n2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity\n2079315 - (release-4.11) Gather ODF config data with Insights\n2079422 - Deprecated 1.25 API call\n2079439 - OVN Pods Assigned Same IP Simultaneously\n2079468 - Enhance the waitForIngressControllerCondition for better CI results\n2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster\n2079610 - Opeatorhub status shows errors\n2079663 - change default image features in RBD storageclass\n2079673 - Add flags to disable migrated code\n2079685 - Storageclass creation page with \"Enable encryption\" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config\n2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster\n2079788 - Operator restarts while applying the acm-ice example\n2079789 - cluster drops 
ImplicitlyEnabledCapabilities during upgrade\n2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade\n2079805 - Secondary scheduler operator should comply to restricted pod security level\n2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding\n2079837 - [RFE] Hub/Spoke example with daemonset\n2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation\n2079845 - The Event Sinks catalog page now has a blank space on the left\n2079869 - Builds for multiple kernel versions should be ran in parallel when possible\n2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices\n2079961 - The search results accordion has no spacing between it and the side navigation bar. \n2079965 - [rebase v1.24]  [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn\u0027t match pod\u0027s OS [Suite:openshift/conformance/parallel] [Suite:k8s]\n2080054 - TAGS arg for installer-artifacts images is not propagated to build images\n2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status\n2080197 - etcd leader changes produce test churn during early stage of test\n2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build\n2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080379 - Group all e2e tests as parallel or serial\n2080387 - Visual connector not appear between the node if a node get created using \"move connector\" to a different application\n2080416 - oc bash-completion problem\n2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load\n2080446 - Sync ironic images with latest bug fixes packages\n2080679 - [rebase 
v1.24] [sig-cli] test failure\n2080681 - [rebase v1.24]  [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]\n2080687 - [rebase v1.24]  [sig-network][Feature:Router] tests are failing\n2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously\n2080964 - Cluster operator special-resource-operator is always in Failing state with reason: \"Reconciling simple-kmod\"\n2080976 - Avoid hooks config maps when hooks are empty\n2081012 - [rebase v1.24]  [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]\n2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available\n2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources\n2081062 - Unrevert RHCOS back to 8.6\n2081067 - admin dev-console /settings/cluster should point out history may be excerpted\n2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network\n2081081 - PreflightValidation \"odd number of arguments passed as key-value pairs for logging\" error\n2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed\n2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount\n2081119 - `oc explain` output of default overlaySize is outdated\n2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects\n2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames\n2081447 - Ingress operator performs spurious updates in response to API\u0027s defaulting of router deployment\u0027s router container\u0027s ports\u0027 protocol field\n2081562 - lifecycle.posStart hook does not 
have network connectivity. \n2081685 - Typo in NNCE Conditions\n2081743 - [e2e] tests failing\n2081788 - MetalLB: the crds are not validated until metallb is deployed\n2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM\n2081895 - Use the managed resource (and not the manifest) for resource health checks\n2081997 - disconnected insights operator remains degraded after editing pull secret\n2082075 - Removing huge amount of ports takes a lot of time. \n2082235 - CNO exposes a generic apiserver that apparently does nothing\n2082283 - Transition to new oVirt Terraform provider\n2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni\n2082380 - [4.10.z] customize wizard is crashed\n2082403 - [LSO] No new build local-storage-operator-metadata-container created\n2082428 - oc patch healthCheckInterval with invalid \"5 s\" to the ingress-controller successfully\n2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS\n2082492 - [IPI IBM]Can\u0027t create image-registry-private-configuration secret with error \"specified resource key credentials does not contain HMAC keys\"\n2082535 - [OCPonRHV]-workers are cloned when \"clone: false\" is specified in install-config.yaml\n2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform\n2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return\n2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging\n2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset\n2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument\n2082763 - Cluster install stuck on the applying for operatorhub \"cluster\"\n2083149 - \"Update blocked\" label incorrectly displays on new minor versions in the \"Other available paths\" modal\n2083153 - Unable to 
use application credentials for Manila PVC creation on OpenStack\n2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters\n2083219 - DPU network operator doesn\u0027t deal with c1... inteface names\n2083237 - [vsphere-ipi] Machineset scale up process delay\n2083299 - SRO does not fetch mirrored DTK images in disconnected clusters\n2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified\n2083451 - Update external serivces URLs to console.redhat.com\n2083459 - Make numvfs \u003e totalvfs error message more verbose\n2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error\n2083514 - Operator ignores managementState Removed\n2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service\n2083756 - Linkify not upgradeable message on ClusterSettings page\n2083770 - Release image signature manifest filename extension is yaml\n2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities\n2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors\n2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form\n2083999 - \"--prune-over-size-limit\" is not working as expected\n2084079 - prometheus route is not updated to \"path: /api\" after upgrade from 4.10 to 4.11\n2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface\n2084124 - The Update cluster modal includes a broken link\n2084215 - Resource configmap \"openshift-machine-api/kube-rbac-proxy\" is defined by 2 manifests\n2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run\n2084280 - GCP API Checks Fail if non-required APIs are not enabled\n2084288 - \"alert/Watchdog must have no gaps or changes\" failing after bump\n2084292 - Access to dashboard resources is needed in dynamic plugin 
SDK\n2084331 - Resource with multiple capabilities included unless all capabilities are disabled\n2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. \n2084438 - Change Ping source spec.jsonData (deprecated) field  to spec.data\n2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster\n2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri\n2084463 - 5 control plane replica tests fail on ephemeral volumes\n2084539 - update azure arm templates to support customer provided vnet\n2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail\n2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (\".\") character\n2084615 - Add to navigation option on search page is not properly aligned\n2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass\n2084732 - A special resource that was created in OCP 4.9 can\u0027t be deleted after an upgrade to 4.10\n2085187 - installer-artifacts fails to build with go 1.18\n2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse\n2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated\n2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster\n2085407 - There is no Edit link/icon for labels on Node details page\n2085721 - customization controller image name is wrong\n2086056 - Missing doc for OVS HW offload\n2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11\n2086092 - update kube to v.24\n2086143 - CNO uses too much memory\n2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks\n2086301 - kubernetes nmstate pods are not running after creating instance\n2086408 - Podsecurity violation error getting logged for  externalDNS operand pods during 
[The remainder of this source is the bug-fix list of a Red Hat Security Advisory for OpenShift Container Platform 4.11, embedded in the record as a raw escaped string (literal "\n" and "\u0027" sequences). It enumerates Bugzilla IDs 2086417 through 2111055 with one-line summaries.]

5. References:

[Red Hat CVE advisory URLs under https://access.redhat.com/security/cve/, covering CVE-2018-25009 through CVE-2021-43527; the list is truncated mid-URL in the source.]
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl\niO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA\nYEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa\n02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl\njRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo\n/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca\nRYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3\njBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR\nSuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W\npHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL\nXcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB\nxBWKPzRxz0Q=9r0B\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nAn update is now available for OpenShift Logging 5.3. 
Description:\n\nOpenshift Logging Bug Fix Release (5.3.0)\n\nSecurity Fix(es):\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should  be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-36331"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "db": "PACKETSTORM",
        "id": "169076"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "165296"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-36331",
        "trust": 4.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165288",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164842",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165287",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3977",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2102",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1965",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4254",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2485.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1880",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3905",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1914",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3789",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0245",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1959",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4229",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021072216",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021061301",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021060725",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "163645",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-391910",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-36331",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169076",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165286",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165296",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165631",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168042",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164967",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "db": "PACKETSTORM",
        "id": "169076"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "165296"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "id": "VAR-202105-1459",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391910"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:59:07.750000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Bug\u00a01956856",
        "trust": 0.8,
        "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html"
      },
      {
        "title": "libwebp Buffer error vulnerability fix",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=151881"
      },
      {
        "title": "Amazon Linux AMI: ALAS-2023-1740",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2023-1740"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2023-2031",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2031"
      },
      {
        "title": "Debian Security Advisories: DSA-4930-1 libwebp -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6dad0021173658916444dfc89f8d2495"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-125",
        "trust": 1.1
      },
      {
        "problemtype": "Out-of-bounds read (CWE-125) [NVD Evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://security.netapp.com/advisory/ntap-20211112-0001/"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212601"
      },
      {
        "trust": 1.8,
        "url": "https://www.debian.org/security/2021/dsa-4930"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/jul/54"
      },
      {
        "trust": 1.8,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1956856"
      },
      {
        "trust": 1.8,
        "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00005.html"
      },
      {
        "trust": 1.8,
        "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00006.html"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2018-25013"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2018-25014"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2018-25012"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-36331"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-36330"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-36332"
      },
      {
        "trust": 0.6,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.6,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3481"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2018-25009"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2018-25010"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3977"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1959"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165287/red-hat-security-advisory-2021-5127-05.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021060725"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/libwebp-five-vulnerabilities-35580"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2485.2"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1965"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021072216"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3789"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1914"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4229"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht212601"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1880"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021061301"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163645/apple-security-advisory-2021-07-21-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2102"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164842/red-hat-security-advisory-2021-4231-04.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165288/red-hat-security-advisory-2021-5129-06.html"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-35522"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-35524"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-27645"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-33574"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14145"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-35521"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-35942"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-3572"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-20266"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-35523"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-3426"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3778"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3796"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-44228"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-20673"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.3,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-10001"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37136"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37137"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21409"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24504"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27777"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20239"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36158"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35448"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3635"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20284"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36386"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24586"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3348"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26140"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3487"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26146"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31440"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3732"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-0129"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3564"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-0427"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23133"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26144"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3679"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36312"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29368"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24588"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-29646"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-29155"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3489"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29660"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26139"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-28971"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-14615"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26143"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3600"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26145"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33200"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-29650"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20194"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26147"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31916"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31829"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3573"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26141"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-28950"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24587"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24503"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3659"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-41617"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/125.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/alas-2023-1740.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36332"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36328"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36329"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/libwebp"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25011"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:5128"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:5129"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20317"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43267"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:5137"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1870"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3575"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30758"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15389"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-5727"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30665"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12973"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30689"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20847"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-18032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1801"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1765"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-4658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20847"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30795"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-5785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1788"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-5727"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30744"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21775"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21806"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27814"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36241"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30797"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27842"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21779"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3948"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27828"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-12973"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1844"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1871"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29338"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30734"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26926"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24870"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-1789"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30663"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3272"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0202"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15389"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28327"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32250"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27776"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26945"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38593"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-19131"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24921"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38185"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23648"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5069"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29162"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23772"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1621"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28736"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://10.0.0.7:2379"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21698"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22576"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3697"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1706"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28734"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30322"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3695"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23806"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1729"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29810"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4115"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24675"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30323"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4627"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "db": "PACKETSTORM",
        "id": "169076"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "165296"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "db": "PACKETSTORM",
        "id": "169076"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "165296"
      },
      {
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-05-21T00:00:00",
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "date": "2021-05-21T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "date": "2021-06-28T19:12:00",
        "db": "PACKETSTORM",
        "id": "169076"
      },
      {
        "date": "2021-12-15T15:20:33",
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "date": "2021-12-15T15:22:36",
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "date": "2021-12-15T15:27:05",
        "db": "PACKETSTORM",
        "id": "165296"
      },
      {
        "date": "2022-01-20T17:48:29",
        "db": "PACKETSTORM",
        "id": "165631"
      },
      {
        "date": "2022-08-10T15:56:22",
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "date": "2021-11-15T17:25:56",
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "date": "2021-05-21T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      },
      {
        "date": "2022-01-27T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "date": "2021-05-21T17:15:08.397000",
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-391910"
      },
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-36331"
      },
      {
        "date": "2022-12-09T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      },
      {
        "date": "2022-01-27T08:46:00",
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      },
      {
        "date": "2023-01-09T16:41:59.350000",
        "db": "NVD",
        "id": "CVE-2020-36331"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "libwebp Out-of-bounds read vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2018-016579"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202105-1382"
      }
    ],
    "trust": 0.6
  }
}

VAR-201912-1854

Vulnerability from variot - Updated: 2025-12-22 22:57

An issue existed in the drawing of web page elements. The issue was addressed with improved logic. This issue is fixed in iOS 13.1 and iPadOS 13.1, and macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. Apple has released an update for each product. The expected impact depends on the individual vulnerability, but may include information leaks, falsification of information, arbitrary code execution, service operation interruption (DoS), privilege escalation, and authentication bypass. Apple macOS Catalina is a family of operating systems developed by Apple for Mac computers; WebKit is one of its web browser engine components. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error that could deanonymize users. This issue was corrected by changing the way livestreams are downloaded. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are also vulnerable to address bar spoofing upon certain JavaScript redirections: an attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. 
The many further WebKit vulnerabilities aggregated in this record share a small set of impacts: in most cases, processing maliciously crafted web content may lead to arbitrary code execution (for example, CVE-2019-8766); in others it may lead to universal cross-site scripting or to the disclosure of process memory. In addition, "Clear History and Website Data" did not clear the history, so a user may be unable to delete browsing history items, and maliciously crafted web content may violate the iframe sandboxing policy. CVE-2019-8719 fixes a remote code execution in webkitgtk4; no further details are available in NIST. Depending on the vulnerability, fixes shipped in iOS 13.1 and iPadOS 13.1, tvOS 13, Safari 13.0.1, iTunes for Windows 12.10.1, and iCloud for Windows 10.7 and 7.14, or in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, and iCloud for Windows 11.0 and 7.15.
This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13.3, iCloud for Windows 10.9, iOS 13.3 and iPadOS 13.3, Safari 13.0.4, iTunes 12.10.3 for Windows, iCloud for Windows 7.16. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13.3, watchOS 6.1.1, iCloud for Windows 10.9, iOS 13.3 and iPadOS 13.3, Safari 13.0.4, iTunes 12.10.3 for Windows, iCloud for Windows 7.16. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13.3, iCloud for Windows 10.9, iOS 13.3 and iPadOS 13.3, Safari 13.0.4, iTunes 12.10.3 for Windows, iCloud for Windows 7.16. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8846) WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. A malicious website may be able to cause a denial of service. This issue is fixed in iCloud for Windows 7.17, iTunes 12.10.4 for Windows, iCloud for Windows 10.9.2, tvOS 13.3.1, Safari 13.0.5, iOS 13.3.1 and iPadOS 13.3.1. 
A DOM object context may not have had a unique security origin. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to universal cross site scripting. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. An application may be able to read restricted memory. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, watchOS 6.2, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, watchOS 6.2, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. A remote attacker may be able to cause arbitrary code execution. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, watchOS 6.2, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. A remote attacker may be able to cause arbitrary code execution. 
This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, watchOS 6.2, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, watchOS 6.2, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. Processing maliciously crafted web content may lead to a cross site scripting attack. (CVE-2020-3902). In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements.

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume
1813506 - Dockerfile not compatible with docker and buildah
1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement
1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node
1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
1842254 - [NooBaa] Compression stats do not add up when compression id disabled
1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster
1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
1854501 - [Tracker-rhcs bug 1848494] pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount
1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)
1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found
1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update
Advisory ID: RHSA-2020:5633-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633
Issue date: 2021-02-24
CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121
=====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
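
The digests above identify the exact release payloads, so a release can also be pinned by digest (`ocp-release@sha256:...`) rather than by tag. As background, an image digest of the form `sha256:<hex>` is simply the SHA-256 of the raw manifest bytes; the sketch below illustrates the comparison with a hypothetical manifest, not a real release payload:

```python
import hashlib

def verify_digest(manifest_bytes: bytes, published: str) -> bool:
    """Check raw manifest bytes against a published 'sha256:<hex>' digest."""
    algo, _, expected = published.partition(":")
    if algo != "sha256":
        raise ValueError("this sketch only handles sha256 digests")
    return hashlib.sha256(manifest_bytes).hexdigest() == expected

# Hypothetical manifest content standing in for a fetched release manifest.
manifest = b'{"schemaVersion": 2}'
digest = "sha256:" + hashlib.sha256(manifest).hexdigest()

print(verify_digest(manifest, digest))         # True
print(verify_digest(manifest + b" ", digest))  # False: any byte change breaks the digest
```

Pulling by digest rather than tag guarantees the payload cannot change underneath you, which is why the advisory publishes one digest per architecture.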

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

  4. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state 1752220 - [OVN] Network Policy fails to work when project label gets overwritten 1756096 - Local storage operator should implement must-gather spec 1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 1768255 - installer reports 100% complete but failing components 1770017 - Init containers restart when the exited container is removed from node. 1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating 1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset 1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale 1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands 1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions 1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved" 1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor 1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image 1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration 1806000 - CRI-O failing with: error reserving ctr name 1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1810438 - Installation logs are not gathered from OCP nodes 1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist 1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation 1813012 - EtcdDiscoveryDomain no longer needed 1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints 1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use 1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist 1819457 - Package Server is in 'Cannot update' status despite properly working 1820141 - [RFE] deploy qemu-quest-agent on the nodes 1822744 - OCS Installation CI test flaking 1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario 1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool 1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file 1829723 - User workload monitoring alerts fire out of the box 1832968 - oc adm catalog mirror does not mirror the index image itself 1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN 1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters 1834995 - olmFull suite always fails once th suite is run on the same 
cluster 1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz 1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4 1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks 1838751 - [oVirt][Tracker] Re-enable skipped network tests 1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups 1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed 1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP 1841119 - Get rid of config patches and pass flags directly to kcm 1841175 - When an Install Plan gets deleted, OLM does not create a new one 1841381 - Issue with memoryMB validation 1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option 1844727 - Etcd container leaves grep and lsof zombie processes 1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs 1847074 - Filter bar layout issues at some screen widths on search page 1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural 1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5 1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service 1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard 1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing 1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD 1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service 1853115 - the restriction of --cloud option should be shown in help text. 
1853116 - --to option does not work with --credentials-requests flag. 1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854567 - "Installed Operators" list showing "duplicated" entries during installation 1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present 1855351 - Inconsistent Installer reactions to Ctrl-C during user input process 1855408 - OVN cluster unstable after running minimal scale test 1856351 - Build page should show metrics for when the build ran, not the last 30 minutes 1856354 - New APIServices missing from OpenAPI definitions 1857446 - ARO/Azure: excessive pod memory allocation causes node lockup 1857877 - Operator upgrades can delete existing CSV before completion 1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed 1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created 1860136 - default ingress does not propagate annotations to route object on update 1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed" 1860518 - unable to stop a crio pod 1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller 1862430 - LSO: PV creation lock should not be acquired in a loop 1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
1862608 - Virtual media does not work on hosts using BIOS, only UEFI 1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network 1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff 1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt 1866043 - Configurable table column headers can be illegible 1866087 - Examining agones helm chart resources results in "Oh no!" 1866261 - Need to indicate the intentional behavior for Ansible in the create api help info 1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement 1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity 1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help 1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed 1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations 1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x 1866482 - Few errors are seen when oc adm must-gather is run 1866605 - No metadata.generation set for build and buildconfig objects 1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name 1866901 - Deployment strategy for BMO allows multiple pods to run at the same time 1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: "?" button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - oc api-resources does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in oc explain
1880787 - No description for Provisioning CRD for oc explain
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with "no fc disk found" error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday] VMware to
CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://' 1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective 1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set 1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used. 1894065 - tag new packages to enable TLS support 1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0 1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries 1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM 1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted 1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI) 1894216 - Improve OpenShift Web Console availability 1894275 - Fix CRO owners file to reflect node owner 1894278 - "database is locked" error when adding bundle to index image 1894330 - upgrade channels needs to be updated for 4.7 1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... 
no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient" 1894374 - Dont prevent the user from uploading a file with incorrect extension 1894432 - [oVirt] sometimes installer timeout on tmp_import_vm 1894477 - bash syntax error in nodeip-configuration.service 1894503 - add automated test for Polarion CNV-5045 1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform 1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets 1894645 - Cinder volume provisioning crashes on nil cloud provider 1894677 - image-pruner job is panicking: klog stack 1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0 1894860 - 'backend' CI job passing despite failing tests 1894910 - Update the node to use the real-time kernel fails 1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package 1895065 - Schema / Samples / Snippets Tabs are all selected at the same time 1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI 1895141 - panic in service-ca injector 1895147 - Remove memory limits on openshift-dns 1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation 1895268 - The bundleAPIs should NOT be empty 1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster 1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release" 1895360 - Machine Config Daemon removes a file although its defined in the dropin 1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. 
OCP 4.6.1 1895372 - Web console going blank after selecting any operator to install from OperatorHub 1895385 - Revert KUBELET_LOG_LEVEL back to level 3 1895423 - unable to edit an application with a custom builder image 1895430 - unable to edit custom template application 1895509 - Backup taken on one master cannot be restored on other masters 1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image 1895838 - oc explain description contains '/' 1895908 - "virtio" option is not available when modifying a CD-ROM to disk type 1895909 - e2e-metal-ipi-ovn-dualstack is failing 1895919 - NTO fails to load kernel modules 1895959 - configuring webhook token authentication should prevent cluster upgrades 1895979 - Unable to get coreos-installer with --copy-network to work 1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV 1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded) 1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed 1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest 1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded 1896244 - Found a panic in storage e2e test 1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general 1896302 - [e2e][automation] Fix 4.6 test failures 1896365 - [Migration]The SDN migration cannot revert under some conditions 1896384 - [ovirt IPI]: local coredns resolution not working 1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6 1896529 - Incorrect instructions in the Serverless operator and application quick starts 1896645 - documentationBaseURL needs to be updated for 4.7 1896697 - [Descheduler] policy.yaml param in cluster configmap is empty 1896704 - Machine API components should honour cluster wide proxy settings 1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters 1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator 1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails 1896918 - start creating new-style Secrets for AWS 1896923 - DNS pod /metrics exposed on anonymous http port 1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1897003 - VNC console cannot be connected after visit it in new window 1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals 1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO 1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored 1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. 
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces 1897138 - oVirt provider uses deprecated cluster-api project 1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly 1897252 - Firing alerts are not showing up in console UI after cluster is up for some time 1897354 - Operator installation showing success, but Provided APIs are missing 1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused" 1897412 - [sriov]disableDrain did not be updated in CRD of manifest 1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page 1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost' 1897520 - After restarting nodes the image-registry co is in degraded true state. 1897584 - Add casc plugins 1897603 - Cinder volume attachment detection failure in Kubelet 1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized" 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests 1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition 1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service 1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing 1897897 - ptp lose sync openshift 4.6 1898036 - no network after reboot (IPI) 1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically 1898097 - mDNS floods the baremetal network 1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem 1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster 1898174 - [OVN] EgressIP does not guard against node IP assignment 1898194 - GCP: can't install on custom machine types 1898238 - Installer validations allow same floating IP for API and Ingress 1898268 - [OVN]: `make check` broken on 4.6 1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default 1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover 1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 1898407 - [Deployment timing regression] Deployment takes longer with 4.7 1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service 1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine 1898500 - Failure to upgrade operator when a Service is included in a Bundle 1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic 1898532 - Display names defined in specDescriptors not respected 1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted 1898613 - Whereabouts should exclude IPv6 ranges 1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase 1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator 1898839 - Wrong YAML in operator metadata 1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job 1898873 - Remove TechPreview Badge from Monitoring 1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way 1899111 - 
[RFE] Update jenkins-maven-agen to maven36 1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist 1899175 - bump the RHCOS boot images for 4.7 1899198 - Use new packages for ipa ramdisks 1899200 - In Installed Operators page I cannot search for an Operator by it's name 1899220 - Support AWS IMDSv2 1899350 - configure-ovs.sh doesn't configure bonding options 1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found" 1899459 - Failed to start monitoring pods once the operator removed from override list of CVO 1899515 - Passthrough credentials are not immediately re-distributed on update 1899575 - update discovery burst to reflect lots of CRDs on openshift clusters 1899582 - update discovery burst to reflect lots of CRDs on openshift clusters 1899588 - Operator objects are re-created after all other associated resources have been deleted 1899600 - Increased etcd fsync latency as of OCP 4.6 1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup 1899627 - Project dashboard Active status using small icon 1899725 - Pods table does not wrap well with quick start sidebar open 1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD) 1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality 1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0" 1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap 1899853 - additionalSecurityGroupIDs not working for master nodes 1899922 - NP changes sometimes influence new pods. 
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet 1900008 - Fix internationalized sentence fragments in ImageSearch.tsx 1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx 1900020 - Remove &apos; from internationalized keys 1900022 - Search Page - Top labels field is not applied to selected Pipeline resources 1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently 1900126 - Creating a VM results in suggestion to create a default storage class when one already exists 1900138 - [OCP on RHV] Remove insecure mode from the installer 1900196 - stalld is not restarted after crash 1900239 - Skip "subPath should be able to unmount" NFS test 1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists 1900377 - [e2e][automation] create new css selector for active users 1900496 - (release-4.7) Collect spec config for clusteroperator resources 1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks 1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue 1900759 - include qemu-guest-agent by default 1900790 - Track all resource counts via telemetry 1900835 - Multus errors when cachefile is not found 1900935 - `oc adm release mirror` panic panic: runtime error 1900989 - accessing the route cannot wake up the idled resources 1901040 - When scaling down the status of the node is stuck on deleting 1901057 - authentication operator health check failed when installing a cluster behind proxy 1901107 - pod donut shows incorrect information 1901111 - Installer dependencies are broken 1901200 - linuxptp-daemon crash when enable debug log level 1901301 - CBO should handle platform=BM without provisioning CR 1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly 1901363 - High Podready Latency due to timed out waiting for
annotations 1901373 - redundant bracket on snapshot restore button 1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true" 1901395 - "Edit virtual machine template" action link should be removed 1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting 1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP 1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema 1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance" 1901604 - CNO blocks editing Kuryr options 1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled 1901909 - The device plugin pods / cni pod are restarted every 5 minutes 1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service 1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error 1902059 - Wire a real signer for service account issuer 1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod 1902253 - MHC status doesn't set RemediationsAllowed = 0 1902299 - Failed to mirror operator catalog - error: destination registry required 1902545 - Cinder csi driver node pod should add nodeSelector for Linux 1902546 - Cinder csi driver node
pod doesn't run on master node 1902547 - Cinder csi driver controller pod doesn't run on master node 1902552 - Cinder csi driver does not use the downstream images 1902595 - Project workloads list view doesn't show alert icon and hover message 1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent 1902601 - Cinder csi driver pods run as BestEffort qosClass 1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group 1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails 1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked 1902824 - failed to generate semver informed package manifest: unable to determine default channel 1902894 - hybrid-overlay-node crashing trying to get node object during initialization 1902969 - Cannot load vmi detail page 1902981 - It should default to current namespace when create vm from template 1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI 1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry 1903034 - OLM continuously printing debug logs 1903062 - [Cinder csi driver] Deployment mounted volume have no write access 1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready 1903107 - Enable vsphere-problem-detector e2e tests 1903164 - OpenShift YAML editor jumps to top every few seconds 1903165 - Improve Canary Status Condition handling for e2e tests 1903172 - Column Management: Fix sticky footer on scroll 1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled 1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format: 1903192 - Role name missing on create role binding form 
1903196 - Popover positioning is misaligned for Overview Dashboard status items 1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. 1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components 1903248 - Backport Upstream Static Pod UID patch 1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests] 1903290 - Kubelet repeatedly log the same log line from exited containers 1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. 1903382 - Panic when task-graph is canceled with a TaskNode with no tasks 1903400 - Migrate a VM which is not running goes to pending state 1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page 1903414 - NodePort is not working when configuring an egress IP address 1903424 - mapi_machine_phase_transition_seconds_sum doesn't work 1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum" 1903639 - Hostsubnet gatherer produces wrong output 1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service 1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started 1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image 1903717 - Handle different Pod selectors for metal3 Deployment 1903733 - Scale up followed by scale down can delete all running workers 1903917 - Failed to load "Developer Catalog" page 1903999 - Httplog response code is always zero 1904026 - The quota controllers should resync on new resources and make progress 1904064 - Automated cleaning is disabled by default 1904124 - DHCP to static lease script doesn't work correctly if starting with infinite 
leases 1904125 - Bootstrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap 1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails 1904133 - KubeletConfig flooded with failure conditions 1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart 1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi ! 1904244 - MissingKey errors for two plugins using i18next.t 1904262 - clusterresourceoverride-operator has version: 1.0.0 every build 1904296 - VPA-operator has version: 1.0.0 every build 1904297 - The index image generated by "opm index prune" leaves unrelated images 1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards 1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade 1904497 - vsphere-problem-detector: Run on vSphere cloud only 1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set 1904502 - vsphere-problem-detector: allow longer timeouts for some operations 1904503 - vsphere-problem-detector: emit alerts 1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody) 1904578 - metric scraping for vsphere problem detector is not configured 1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade 1904663 - IPI pointer customization MachineConfig always generated 1904679 - [Feature:ImageInfo] Image info should display information about images 1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image 1904684 - [sig-cli] oc debug ensure it works with image streams 1904713 - Helm charts with kubeVersion restriction are filtered incorrectly 1904776 - Snapshot modal alert is not pluralized 1904824 - Set vSphere hostname from guestinfo before NM starts 1904941 - Insights
status is always showing a loading icon 1904973 - KeyError: 'nodeName' on NP deletion 1904985 - Prometheus and thanos sidecar targets are down 1904993 - Many ampersand special characters are found in strings 1905066 - QE - Monitoring test cases - smoke test suite automation 1905074 - QE -Gherkin linter to maintain standards 1905100 - Too many haproxy processes in default-router pod causing high load average 1905104 - Snapshot modal disk items missing keys 1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm 1905119 - Race in AWS EBS determining whether custom CA bundle is used 1905128 - [e2e][automation] e2e tests succeed without actually execute 1905133 - operator conditions special-resource-operator 1905141 - vsphere-problem-detector: report metrics through telemetry 1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures 1905194 - Detecting broken connections to the Kube API takes up to 15 minutes 1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests 1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP 1905253 - Inaccurate text at bottom of Events page 1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905299 - OLM fails to update operator 1905307 - Provisioning CR is missing from must-gather 1905319 - cluster-samples-operator containers are not requesting required memory resource 1905320 - csi-snapshot-webhook is not requesting required memory resource 1905323 - dns-operator is not requesting required memory resource 1905324 - ingress-operator is not requesting required memory resource 1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory 1905328 - Changing the bound token service account issuer invalids previously issued bound tokens 1905329 - 
openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory 1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails 1905347 - QE - Design Gherkin Scenarios 1905348 - QE - Design Gherkin Scenarios 1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod 1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted 1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input 1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation 1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1 1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work 1905416 - Hyperlink not working from Operator Description 1905430 - usbguard extension fails to install because of missing correct protobuf dependency version 1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads 1905502 - Test flake - unable to get https transport for ephemeral-registry 1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6. 
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs 1905610 - Fix typo in export script 1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster 1905640 - Subscription manual approval test is flaky 1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry 1905696 - ClusterMoreUpdatesModal component did not get internationalized 1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes 1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project 1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster 1905792 - [OVN]Cannot create egressfirewall with dnsName 1905889 - Should create SA for each namespace that the operator scoped 1905920 - Quickstart exit and restart 1905941 - Page goes to error after create catalogsource 1905977 - QE gherkin design scenario-pipeline metrics ODC-3711 1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters 1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected 1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it 1906118 - OCS feature detection constantly polls storageclusters and storageclasses 1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource 1906121 - [oc] After new-project creation, the kubeconfig file does not set the project 1906134 - OLM should not create OperatorConditions for copied CSVs 1906143 - CBO supports log levels 1906186 - i18n: Translators are not able to translate `this` without context for alert manager config 1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots 1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion
to support volume resize. 1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*' 1906318 - use proper term for Authorized SSH Keys 1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional 1906356 - Unify Clone PVC boot source flow with URL/Container boot source 1906397 - IPA has incorrect kernel command line arguments 1906441 - HorizontalNav and NavBar have invalid keys 1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log 1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project 1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them 1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures 1906511 - Root reprovisioning tests flaking often in CI 1906517 - Validation is not robust enough and may prevent to generate install-config. 1906518 - Update snapshot API CRDs to v1 1906519 - Update LSO CRDs to use v1 1906570 - Number of disruptions caused by reboots on a cluster cannot be measured 1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope 1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs 1906655 - [SDN]Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs 1906679 - quick start panel styles are not loaded 1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber 1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form 1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created 1906689 - user can pin to nav configmaps and secrets multiple times 1906691 - Add doc which describes disabling helm chart repository 1906713 - Quick 
starts not accessible for a developer user 1906718 - helm chart "provided by Redhat" is misspelled 1906732 - Machine API proxy support should be tested 1906745 - Update Helm endpoints to use Helm 3.4.x 1906760 - performance issues with topology constantly re-rendering 1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring 1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section 1906769 - topology fails to load with non-kubeadmin user 1906770 - shortcuts on mobiles view occupies a lot of space 1906798 - Dev catalog customization doesn't update console-config ConfigMap 1906806 - Allow installing extra packages in ironic container images 1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer 1906835 - Topology view shows add page before then showing full project workloads 1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version 1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy 1906860 - Bump kube dependencies to v1.20 for Net Edge components 1906864 - Quick Starts Tour: Need to adjust vertical spacing 1906866 - Translations of Sample-Utils 1906871 - White screen when sort by name in monitoring alerts page 1906872 - Pipeline Tech Preview Badge Alignment 1906875 - Provide an option to force backup even when API is not available. 
1906877 - 'Placeholder' value in search filter does not match column heading in Vulnerabilities 1906879 - Add missing i18n keys 1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install 1906896 - No Alerts causes odd empty Table (Need no content message) 1906898 - Missing User RoleBindings in the Project Access Web UI 1906899 - Quick Start - Highlight Bounding Box Issue 1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1 1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers 1906935 - Delete resources when Provisioning CR is deleted 1906968 - Must-gather should support collecting kubernetes-nmstate resources 1906986 - Ensure failed pod adds are retried even if the pod object doesn't change 1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt 1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change 1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible. 1907269 - Tooltip data are different when checking stack or not checking stack for the same time 1907280 - Install tour of OCS not available. 
1907282 - Topology page breaks with white screen 1907286 - The default mhc machine-api-termination-handler couldn't watch spot instances 1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent 1907293 - Increase timeouts in e2e tests 1907295 - Gherkin script to improve management for helm 1907299 - Advanced Subscription Badge for KMS and Arbiter not present 1907303 - Align VM template list items by baseline 1907304 - Use PF styles for selected template card in VM Wizard 1907305 - Drop 'ISO' from CDROM boot source message 1907307 - Support and provider labels should be passed on between templates and sources 1907310 - Pin action should be renamed to favorite 1907312 - VM Template source popover is missing info about added date 1907313 - ClusterOperator objects cannot be overridden with cvo-overrides 1907328 - iproute-tc package is missing in ovn-kube image 1907329 - CLUSTER_PROFILE env. variable is not used by the CVO 1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached" 1907373 - Rebase to kube 1.20.0 1907375 - Bump to latest available 1.20.x k8s - workloads team 1907378 - Gather netnamespaces networking info 1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity 1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one 1907390 - prometheus-adapter: panic after k8s 1.20 bump 1907399 - build log icon link on topology nodes causes app to reload 1907407 - Buildah version not accessible 1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer" 1907453 - Dev Perspective -> running vm details -> resources -> no data 1907454 - Install PodConnectivityCheck CRD with CNO 1907459 - "The Boot source is also maintained by Red Hat." 
is always shown for all boot sources 1907475 - Unable to estimate the error rate of ingress across the connected fleet 1907480 - 'Active alerts' section throwing forbidden error for users. 1907518 - Kamelets/Eventsource should be shown to user if they have create access 1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US 1907610 - Update kubernetes deps to 1.20 1907612 - Update kubernetes deps to 1.20 1907621 - openshift/installer: bump cluster-api-provider-kubevirt version 1907628 - Installer does not set primary subnet consistently 1907632 - Operator Registry should update its kubernetes dependencies to 1.20 1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters 1907644 - fix up handling of non-critical annotations on daemonsets/deployments 1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?) 1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication 1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail 1907767 - [e2e][automation]update test suite for kubevirt plugin 1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot 1907792 - The 'overrides' of the OperatorCondition cannot block the operator upgrade 1907793 - Surface support info in VM template details 1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage 1907822 - [OCP on OSP] openshift-install panics when checking quota if install-config has no flavor set 1907863 - Quickstarts status not updating when starting the tour 1907872 - dual stack with an ipv6 network fails in bootstrap phase 1907874 - QE - Design Gherkin Scenarios for epic ODC-5057 1907875 - No response when trying to expand a PVC with an invalid size 1907876 - Refactoring record package to make gatherer configurable 1907877 - QE - Automation- pipelines builder scripts 1907883 
- Fix Pipeline creation without namespace issue 1907888 - Fix pipeline list page loader 1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form 1907892 - Unable to edit application deployed using "From Devfile" option 1907893 - navSortUtils.spec.ts unit test failure 1907896 - When a workload is added, Topology does not place the new items well 1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless of what is defined in common-template 1907924 - Enable madvdontneed in OpenShift Images 1907929 - Enable madvdontneed in OpenShift System Components Part 2 1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot 1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context 1907948 - OCM-O bump to k8s 1.20 1907952 - bump to k8s 1.20 1907972 - Update OCM link to open Insights tab 1907989 - DataVolumes were introduced in common templates - VM creation fails in the UI 1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916 1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni 1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk 1908035 - dynamic-demo-plugin build does not generate dist directory 1908135 - quick search modal is not centered over topology 1908145 - kube-scheduler-recovery-controller container crash loops when router pod is co-scheduled 1908159 - [AWS C2S] MCO fails to sync cloud config 1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384 custom type (n1-custom-4-16384) 1908180 - Add source for template is stuck in preparing pvc 1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens 1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN 1908277 - QE - Automation- pipelines actions scripts 1908280 - Documentation 
describing ignore-volume-az is incorrect 1908296 - Fix pipeline builder form yaml switcher validation issue 1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI 1908323 - Create button missing for PLR in the search page 1908342 - The new pv_collector_total_pv_count is not reported via telemetry 1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name 1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots 1908349 - Volume snapshot tests are failing after 1.20 rebase 1908353 - QE - Automation- pipelines runs scripts 1908361 - bump to k8s 1.20 1908367 - QE - Automation- pipelines triggers scripts 1908370 - QE - Automation- pipelines secrets scripts 1908375 - QE - Automation- pipelines workspaces scripts 1908381 - Go Dependency Fixes for Devfile Lib 1908389 - Loadbalancer Sync failing on Azure 1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived 1908407 - Backport Upstream 95269 to fix potential crash in kubelet 1908410 - Exclude Yarn from VSCode search 1908425 - Create Role Binding form subject type and name are undefined when All Projects is selected 1908431 - When the marketplace-operator pod gets restarted, the custom catalogsources are gone, as well as the pods 1908434 - Remove &apos from metal3-plugin internationalized strings 1908437 - Operator backed with no icon has no badge associated with the CSV tag 1908459 - bump to k8s 1.20 1908461 - Add bugzilla component to OWNERS file 1908462 - RHCOS 4.6 ostree removed dhclient 1908466 - CAPO AZ Screening/Validating 1908467 - Zoom in and zoom out in topology package should be sentence case 1908468 - [Azure][4.7] Installer can't properly parse instance type with non-integer memory size 1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster 1908471 - OLM should bump k8s dependencies to 1.20 1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all 
manifests 1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1908545 - VM clone dialog does not open 1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard 1908562 - Pod readiness is not being observed in real world cases 1908565 - [4.6] Cannot filter the platform/arch of the index image 1908573 - Align the style of flavor 1908583 - bootstrap does not run on additional networks if configured for master in install-config 1908596 - Race condition on operator installation 1908598 - Persistent Dashboard shows events for all provisioners 1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state 1908648 - Skip TestKernelType test on OKD, adjust TestExtensions 1908650 - The title of customize wizard is inconsistent 1908654 - cluster-api-provider: volume and disk names shouldn't be changed by machine-api-operator 1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s] 1908687 - Option to save user settings separately when using local bridge (affects console developers only) 1908697 - Show kubectl diff command in the oc diff help page 1908715 - Pressing the arrow up key on the topmost quick-search list item should loop back to bottom 1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds 1908717 - "missing unit character in duration" error in some network dashboards 1908746 - [Safari] Drop Shadow doesn't work as expected on hover on workload 1908747 - stale S3 CredentialsRequest in CCO manifest 1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase 1908830 - RHCOS 4.6 - Missing Initiatorname 1908868 - Update empty state message for EventSources and Channels tab 1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite 
tolerations from tainted nodes 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1908888 - Dualstack does not work with multiple gateways 1908889 - Bump CNO to k8s 1.20 1908891 - TestDNSForwarding DNS operator e2e test is failing frequently 1908914 - CNO: upgrade nodes before masters 1908918 - Pipeline builder yaml view sidebar is not responsive 1908960 - QE - Design Gherkin Scenarios 1908971 - Gherkin Script for pipeline debt 4.7 1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated 1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console 1908998 - [cinder-csi-driver] doesn't detect the credentials change 1909004 - "No datapoints found" for RHEL node's filesystem graph 1909005 - i18n: workloads list view heading is not translated 1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects 1909027 - Disks option of Selected capacity chart shows HDD disk even on selection of SSD disk type 1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware 1909067 - Web terminal should keep latest output when connection closes 1909070 - PLR and TR Logs component is not streaming as fast as tkn 1909092 - Error Message should not confuse user on Channel form 1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page 1909108 - Machine API components should use 1.20 dependencies 1909116 - Catalog Sort Items dropdown is not aligned on Firefox 1909198 - Move Sink action option is not working 1909207 - Accessibility Issue on monitoring page 1909236 - Remove pinned icon overlap on resource name 1909249 - Intermittent packet drop from pod to pod 1909276 - Accessibility Issue on create project modal 1909289 - oc debug of an init container no longer 
works 1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2 1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle 1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it 1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O 1909464 - Build operator-registry with golang-1.15 1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found 1909521 - Add kubevirt cluster type for e2e-test workflow 1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created 1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node 1909610 - Fix available capacity when no storage class selected 1909678 - scale up / down buttons available on pod details side panel 1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined 1909739 - Arbiter request data changes 1909744 - cluster-api-provider-openstack: Bump gophercloud 1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline 1909791 - Update standalone kube-proxy config for EndpointSlice 1909792 - Empty states for some details page subcomponents are not i18ned 1909815 - Perspective switcher is only half-i18ned 1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body 1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI 1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing 1909911 - [OVN]EgressFirewall caused a segfault 1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument 1909958 - Support Quick Start Highlights Properly 1909978 - 
ignore-volume-az = yes not working on standard storageClass 1909981 - Improve statement in template select step 1909992 - Fail to pull the bundle image when using the private index image 1910024 - Reload issue in latest (4.7) UI code on 4.6 cluster locally in dev 1910036 - QE - Design Gherkin Scenarios ODC-4504 1910049 - UPI: ansible-galaxy is not supported 1910127 - [UPI on oVirt]: Improve UPI Documentation 1910140 - fix the api dashboard with changes in upstream kube 1.20 1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable 1910165 - DHCP to static lease script doesn't handle multiple addresses 1910305 - [Descheduler] - The minKubeVersion should be 1.20.0 1910409 - Notification drawer is not localized for i18n 1910459 - Could not provision gcp volume if secret gcp-pd-cloud-credentials is deleted 1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation 1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page 1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work 1910581 - library-go: proxy ENV is not injected into csi-driver-controller, which leads to the storage operator never getting ready 1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability 1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded 1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected" 1910753 - Support Directory Path to Devfile 1910805 - Missing translation for Pipeline status and breadcrumb text 1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer 1910840 - Show nonexistent command info in the oc rollback -h help page 1910859 - breadcrumbs don't use last namespace 1910866 - Unify templates string 1910870 - Unify template 
dropdown action 1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6 1911129 - Monitoring charts render nothing when switching from a Deployment to "All workloads" 1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard 1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration" 1911213 - Wrong and misleading warning for VMs that were created manually (not from template) 1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created 1911269 - waiting for the build message present when build exists 1911280 - Builder images are not detected for Dotnet, Httpd, NGINX 1911307 - Pod Scale-up requires extra privileges in OpenShift web-console 1911381 - "Select Persistent Volume Claim project" shows in customize wizard when selecting a source available template 1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error 1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template 1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation 1911418 - [v2v] The target storage class name is not displayed if default storage class is used 1911434 - git ops empty state page displays icon with watermark 1911443 - SSH Certification field should be validated 1911465 - IOPS displays wrong unit 1911474 - Devfile Application Group Does Not Delete Cleanly (errors) 1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController 1911574 - Expose volume mode on Upload Data form 1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined 1911632 - rpm-ostree command fails due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel 1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command 
output said 'Failed to run bundle' 1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck in provisioning state 1911782 - Descheduler should not evict a pod using local storage via the PVC 1911796 - uploading flow being displayed before submitting the form 1912066 - The ansible type operator's manager container is not stable when managing the CR 1912077 - helm operator's default rbac forbidden 1912115 - [automation] Analyze job keeps failing because of 'JavaScript heap out of memory' 1912237 - Rebase CSI sidecars for 4.7 1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page 1912409 - Fix flow schema deployment 1912434 - Update guided tour modal title 1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken 1912523 - Standalone pod status not updating in topology graph 1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion 1912558 - TaskRun list and detail screen doesn't show Pending status 1912563 - p&f: carry 97206: clean up executing request on panic 1912565 - OLM macOS local build broken by moby/term dependency 1912567 - [OCP on RHV] Node goes to 'NotReady' status when shutting down vm from RHV UI only on the second deletion 1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components' pods in CrashLoopBackOff 1912590 - publicImageRepository not being populated 1912640 - Go operator's controller pods are forbidden 1912701 - Handle dual-stack configuration for NIC IP 1912703 - multiple queries can't be plotted in the same graph under some conditions 1912730 - Operator backed: In-context should support visual connector if SBO is not installed 1912828 - Align High Performance VMs with High Performance in RHV-UI 1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates 1912852 - VM from wizard - available VM templates - "storage" field is "0 B" 
1912888 - recycler template should be moved to KCM operator 1912907 - Helm chart repository index can contain unresolvable relative URLs 1912916 - Set external traffic policy to cluster for IBM platform 1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller 1912938 - Update confirmation modal for quick starts 1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment 1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment 1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver 1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912977 - rebase upstream static-provisioner 1913006 - Remove etcd v2 specific alerts with etcd_http* metrics 1913011 - [OVN] Pod's external traffic does not use egressrouter macvlan ip as a source ip 1913037 - update static-provisioner base image 1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state 1913085 - Regression OLM uses scoped client for CRD installation 1913096 - backport: cadvisor machine metrics are missing in k8s 1.19 1913132 - The installation of Openshift Virtualization reports success early, before it has eventually succeeded 1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root 1913196 - Guided Tour doesn't handle resizing of browser 1913209 - Support modal should be shown for community supported templates 1913226 - [Migration] The SDN migration rollback failed if vxlanPort is customized 1913249 - update info alert this template is 
not editable 1913285 - VM list empty state should link to virtualization quick starts 1913289 - Rebase AWS EBS CSI driver for 4.7 1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled 1913297 - Remove restriction of taints for arbiter node 1913306 - unnecessary scroll bar is present on quick starts panel 1913325 - 1.20 rebase for openshift-apiserver 1913331 - Import from git: Fails to detect Java builder 1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used 1913343 - (release-4.7) Added changelog file for insights-operator 1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator 1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en." 1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads 1913420 - Time duration setting of resources is not being displayed 1913536 - 4.6.9 -> 4.7 upgrade hangs. 
RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\\n\" 1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase 1913560 - Normal user cannot load template on the new wizard 1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user 1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table 1913568 - Normal user cannot create template 1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker 1913585 - Topology descriptive text fixes 1913608 - Table data contains data value None after change time range in graph and change back 1913651 - Improved Red Hat image and crashlooping OpenShift pod collection 1913660 - Change location and text of Pipeline edit flow alert 1913685 - OS field not disabled when creating a VM from a template 1913716 - Include additional use of existing libraries 1913725 - Refactor Insights Operator Plugin states 1913736 - Regression: fails to deploy computes when using root volumes 1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes 1913751 - add third-party network plugin test suite to openshift-tests 1913783 - QE-To fix the merging pr issue, commenting the afterEach() block 1913807 - Template support badge should not be shown for community supported templates 1913821 - Need definitive steps about uninstalling descheduler operator 1913851 - Cluster Tasks are not sorted in pipeline builder 1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists 1913951 - Update the Devfile Sample Repo to an Official Repo Host 1913960 - Cluster Autoscaler should use 1.20 dependencies 1913969 - Field dependency descriptor can sometimes cause an exception 1914060 - Disk created from 'Import via Registry' cannot be used as boot disk 1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy 1914090 - Grafana - The resulting 
dataset is too large to graph (OCS RBD volumes being counted as disks) 1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances 1914125 - Still using /dev/vde as default device path when creating localvolume 1914183 - Empty NAD page is missing link to quickstarts 1914196 - target port in 'from dockerfile' flow does nothing 1914204 - Creating VM from dev perspective may fail with template not found error 1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets 1914212 - [e2e][automation] Add test to validate bootable disk source 1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes 1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows 1914287 - Bring back selfLink 1914301 - User VM Template source should show the same provider as template itself 1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs 1914309 - /terminal page when WTO not installed shows nonsensical error 1914334 - order of getting started samples is arbitrary 1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x 1914349 - Increase and decrease buttons in max and min pods in HPA page have distorted UI 1914405 - Quick search modal should be opened when coming back from a selection 1914407 - It's not clear that node-ca is running as non-root 1914427 - Count of pods on the dashboard is incorrect 1914439 - Typo in SRIOV port create command example 1914451 - cluster-storage-operator pod running as root 1914452 - oc image append, oc image extract output wrong suggestion to use --keep-manifest-list=true 1914642 - Customize Wizard Storage tab does not pass validation 1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling 1914793 - device names should not be translated 1914894 - 
Warn about using non-groupified api version 1914926 - webdriver-manager pulls incorrect version of ChromeDriver due to a bug 1914932 - Put correct resource name in relatedObjects 1914938 - PVC disk is not shown on customization wizard general tab 1914941 - VM Template rootdisk is not deleted after fetching default disk bus 1914975 - Collect logs from openshift-sdn namespace 1915003 - No estimate of average node readiness during lifetime of a cluster 1915027 - fix MCS blocking iptables rules 1915041 - s3:ListMultipartUploadParts is relied on implicitly 1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons 1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours 1915085 - Pods created and rapidly terminated get stuck 1915114 - [aws-c2s] worker machines are not created during install 1915133 - Missing default pinned nav items in dev perspective 1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource 1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot 1915188 - Remove HostSubnet anonymization 1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment 1915217 - OKD payloads expect to be signed with production keys 1915220 - Remove dropdown workaround for user settings 1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure 1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod 1915277 - [e2e][automation]fix cdi upload form test 1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout 1915304 - Updating scheduling component builder & base images to be consistent with ART 1915312 - Prevent scheduling Linux openshift-network-diagnostics pod on Windows node 1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from 
different connection 1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod 1915357 - Dev Catalog doesn't load anything if virtualization operator is installed 1915379 - New template wizard should require provider and make support input a dropdown type 1915408 - Failure in operator-registry kind e2e test 1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation 1915460 - Cluster name size might affect installations 1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance 1915540 - Silent 4.7 RHCOS install failure on ppc64le 1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI) 1915582 - p&f: carry upstream pr 97860 1915594 - [e2e][automation] Improve test for disk validation 1915617 - Bump bootimage for various fixes 1915624 - "Please fill in the following field: Template provider" blocks customize wizard 1915627 - Translate Guided Tour text. 1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error 1915647 - Intermittent White screen when the connector is dragged to revision 1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased 1915654 - [e2e][automation] Add a verification that the Affinity modal should hint "Matching node found" 1915661 - Can't run the 'oc adm prune' command in a pod 1915672 - Kuryr doesn't work with selfLink disabled. 
1915674 - Golden image PVC creation - storage size should be taken from the template 1915685 - Message for not supported template is not clear enough 1915760 - Need to increase timeout to wait rhel worker get ready 1915793 - quick starts panel syncs incorrectly across browser windows 1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster 1915818 - vsphere-problem-detector: use "_totals" in metrics 1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol 1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version 1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0 1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics 1915885 - Kuryr doesn't support workers running on multiple subnets 1915898 - TaskRun log output shows "undefined" in streaming 1915907 - test/cmd/builds.sh uses docker.io 1915912 - sig-storage-csi-snapshotter image not available 1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard 1915939 - Resizing the browser window removes Web Terminal Icon 1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] 1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7 1915962 - ROKS: manifest with machine health check fails to apply in 4.7 1915972 - Global configuration breadcrumbs do not work as expected 1915981 - Install ethtool and conntrack in container for debugging 1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception 1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups 1916021 - OLM enters infinite loop if Pending CSV replaces itself 1916056 - Need Visual Web 
Terminal metric enabled for OCP monitoring telemetry 1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations 1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk 1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration 1916145 - Explicitly set minimum versions of python libraries 1916164 - Update csi-driver-nfs builder & base images to be consistent with ART 1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7 1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third 1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2 1916379 - error metrics from vsphere-problem-detector should be gauge 1916382 - Can't create ext4 filesystems with Ignition 1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates 1916401 - Deleting an ingress controller with a bad DNS Record hangs 1916417 - [Kuryr] Must-gather does not have all Custom Resources information 1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image 1916454 - teach CCO about upgradeability from 4.6 to 4.7 1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation 1916502 - Boot disk mirroring fails with mdadm error 1916524 - Two rootdisk shows on storage step 1916580 - Default yaml is broken for VM and VM template 1916621 - oc adm node-logs examples are wrong 1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
1916692 - Possibly fails to destroy LB and thus cluster 1916711 - Update Kube dependencies in MCO to 1.20.0 1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6 1916764 - editing a workload with no application applied, will auto fill the app 1916834 - Pipeline Metrics - Text Updates 1916843 - collect logs from openshift-sdn-controller pod 1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed 1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually 1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited 1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together" 1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace 1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document 1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error 1917117 - Common templates - disks screen: invalid disk name 1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created 1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator 1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
1917148 - [oVirt] Consume 23-10 ovirt sdk 1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened 1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console 1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory 1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7 1917327 - annotations.message maybe wrong for NTOPodsNotReady alert 1917367 - Refactor periodic.go 1917371 - Add docs on how to use the built-in profiler 1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console 1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui 1917484 - [BM][IPI] Failed to scale down machineset 1917522 - Deprecate --filter-by-os in oc adm catalog mirror 1917537 - controllers continuously busy reconciling operator 1917551 - use min_over_time for vsphere prometheus alerts 1917585 - OLM Operator install page missing i18n 1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types 1917605 - Deleting an exgw causes pods to no longer route to other exgws 1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API 1917656 - Add to Project/application for eventSources from topology shows 404 1917658 - Show TP badge for sources powered by camel connectors in create flow 1917660 - Editing parallelism of job get error info 1917678 - Could not provision pv when no symlink and target found on rhel worker 1917679 - Hide double CTA in admin pipelineruns tab 1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster. 
1917759 - Console operator panics after setting plugin that does not exist to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather a list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP 1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026` 1918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connector's are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration (SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period 
1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistent failures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - 
OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese translate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information for `oc new-project --skip-config-write` is wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated. 
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode` 1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off. 
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective 
and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't come up after deploying with Vault details from UI 1921572 - For external source (i.e GitHub Source) form view as well shows yaml 1921580 - [e2e][automation] Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation] 
Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 
1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources shows `undefined` in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprecated templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image should not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers declaration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found
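Bug 1921650 in the list above tracks CVE-2021-3121, where generated gogo/protobuf unmarshalling code computed an index into the input buffer and used it without validation, so a crafted message could trigger an out-of-bounds access. The following Go snippet is a minimal sketch of that bug class and its fix; the function name and layout are illustrative only, not the actual gogo/protobuf code:

```go
package main

import "fmt"

// decodeField reads one byte of a message at a computed offset. The bug
// class behind CVE-2021-3121 was the absence of the bounds check below:
// generated code indexed buf[offset] directly, so a negative or
// past-the-end offset derived from attacker-controlled input could panic
// or read out of bounds. This is a hypothetical sketch, not the real API.
func decodeField(buf []byte, offset int) (byte, error) {
	// The fix class: validate the computed index before using it.
	if offset < 0 || offset >= len(buf) {
		return 0, fmt.Errorf("field offset %d out of range for %d-byte buffer", offset, len(buf))
	}
	return buf[offset], nil
}

func main() {
	b, err := decodeField([]byte{0x08, 0x96, 0x01}, 1)
	fmt.Println(b, err) // in-bounds read succeeds
	_, err = decodeField([]byte{0x08}, 5)
	fmt.Println(err) // out-of-range offset is rejected instead of panicking
}
```

The upstream fix (gogo/protobuf v1.3.2) takes the same general approach, rejecting negative or out-of-range computed indexes and lengths before they are used.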

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 
https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8 4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4 mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk 4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6 McvuEaxco7U= =sw8i -----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if 
the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 2007379 - Events are not generated for master offset for ordinary clock 2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace 2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address 2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error 2007522 - No new local-storage-operator-metadata-container is build for 4.10 2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10 2007580 - Azure cilium installs are failing e2e tests 2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10 2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes 2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures 2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow 2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates 2007802 - AWS machine actuator get stuck if machine is completely missing 2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator 2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process 2008151 - Topology breaks on clicking in empty state 2008185 - Console operator go.mod should use go 1.16.version 2008201 - openstack-az job is failing on haproxy idle test 2008207 - vsphere CSI driver doesn't set resource limits 2008223 - gather_audit_logs: fix oc command line to get the current audit profile 2008235 - The Save button in the Edit DC form remains disabled 2008256 - Update Internationalization README with scope info 2008321 - Add correct documentation link for MON_DISK_LOW 2008462 - Disable PodSecurity feature gate for 4.10 2008490 - 
Backing store details page does not contain all the kebab actions. 2010181 - Environment variables not getting reset on reload on deployment edit form 2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2010341 - OpenShift Alerting Rules Style-Guide Compliance 2010342 - Local console builds can have out of memory errors 2010345 - OpenShift Alerting Rules Style-Guide Compliance 2010348 - Reverts PIE build mode for K8S components 2010352 - OpenShift Alerting Rules Style-Guide Compliance 2010354 - OpenShift Alerting Rules Style-Guide Compliance 2010359 - OpenShift Alerting Rules Style-Guide Compliance 2010368 - OpenShift Alerting Rules Style-Guide Compliance 2010376 - OpenShift Alerting Rules Style-Guide Compliance 2010662 - Cluster is unhealthy after image-registry-operator tests 2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent) 2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API 2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address 2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing 2010864 - Failure building EFS operator 2010910 - ptp worker events unable to identify interface for multiple interfaces 2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24 2010921 - Azure Stack Hub does not handle additionalTrustBundle 2010931 - SRO CSV uses non default category "Drivers and plugins" 2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
2011038 - optional operator conditions are confusing 2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass 2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's 2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image 2011368 - Tooltip in pipeline visualization shows misleading data 2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels 2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards 2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster 2011513 - Kubelet rejects pods that use resources that should be freed by completed pods 2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine" 2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented 2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore 2011733 - Repository README points to broken documentarion link 2011753 - Ironic resumes clean before raid configuration job is actually completed 2011809 - The nodes page in the openshift console doesn't work. 
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing 
in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator 
package cic-operator FOREIGN KEY constraint failed. 2022447 - ServiceAccount in manifests conflicts with OLM 2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 2025821 - Make "Network Attachment Definitions" available to regular user 2025823 - The console nav bar ignores plugin separator in existing sections 2025830 - CentOS capitalizaion is wrong 2025837 - Warn users that the RHEL URL expire 2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-* 2025903 - [UI] RoleBindings tab doesn't show correct rolebindings 2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2026178 - OpenShift Alerting Rules Style-Guide Compliance 2026209 - Updation of task is getting failed (tekton hub integration) 2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io" 2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates 2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct 2026352 - Kube-Scheduler revision-pruner fail during install of new cluster 2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment 2026383 - Error when rendering custom Grafana dashboard through ConfigMap 2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation 2026396 - Cachito Issues: sriov-network-operator Image build failure 2026488 - openshift-controller-manager - delete event is repeating pathologically 2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists 2039382 - gather_metallb_logs does not have execution permission 2039406 - logout from rest session after vsphere operator sync is finished 2039408 - Add GCP region northamerica-northeast2 to allowed regions 2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration 2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment 2039491 - oc - git:// protocol used in unit tests 2039516 - Bump OVN to ovn21.12-21.12.0-25 2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate 2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled 2039541 - Resolv-prepender script duplicating entries 2039586 - [e2e] update centos8 to centos stream8 2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty 2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3' 2039670 - Create PDBs for control plane components 2039678 - Page goes blank when create image pull secret 2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported 2039743 - React missing key warning when open operator hub detail page (and maybe others as well) 2039756 - React missing key warning when open KnativeServing details 2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab 2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard 2039781 - [GSS] OBC is not visible by admin of a Project on Console 2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector 2039868 - Insights Advisor widget is not in the disabled state when the Insights 
Operator is disabled 2039880 - Log level too low for control plane metrics 2039919 - Add E2E test for router compression feature 2039981 - ZTP for standard clusters installs stalld on master nodes 2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state


  1. Gentoo Linux Security Advisory GLSA 202003-22

                                       https://security.gentoo.org/

Severity: Normal
Title: WebkitGTK+: Multiple vulnerabilities
Date: March 15, 2020
Bugs: #699156, #706374, #709612
ID: 202003-22


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which may lead to arbitrary code execution.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------
 1  net-libs/webkit-gtk       < 2.26.4                   >= 2.26.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"
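The GLSA's fix threshold boils down to a version comparison against 2.26.4. As an illustrative aid (not part of the GLSA, and no substitute for Portage's own version logic, which handles richer version syntax), a dotted-numeric comparison might look like:

```python
def parse_version(v):
    # Assumes plain dotted-numeric versions such as "2.26.4"; Gentoo's
    # version grammar is richer, so treat this as a sketch only.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed, fixed="2.26.4"):
    """True if `installed` sorts below the first unaffected release."""
    return parse_version(installed) < parse_version(fixed)

print(is_vulnerable("2.26.3"))  # True
print(is_vulnerable("2.26.4"))  # False
```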

References

[ 1 ] CVE-2019-8625   https://nvd.nist.gov/vuln/detail/CVE-2019-8625
[ 2 ] CVE-2019-8674   https://nvd.nist.gov/vuln/detail/CVE-2019-8674
[ 3 ] CVE-2019-8707   https://nvd.nist.gov/vuln/detail/CVE-2019-8707
[ 4 ] CVE-2019-8710   https://nvd.nist.gov/vuln/detail/CVE-2019-8710
[ 5 ] CVE-2019-8719   https://nvd.nist.gov/vuln/detail/CVE-2019-8719
[ 6 ] CVE-2019-8720   https://nvd.nist.gov/vuln/detail/CVE-2019-8720
[ 7 ] CVE-2019-8726   https://nvd.nist.gov/vuln/detail/CVE-2019-8726
[ 8 ] CVE-2019-8733   https://nvd.nist.gov/vuln/detail/CVE-2019-8733
[ 9 ] CVE-2019-8735   https://nvd.nist.gov/vuln/detail/CVE-2019-8735
[ 10 ] CVE-2019-8743  https://nvd.nist.gov/vuln/detail/CVE-2019-8743
[ 11 ] CVE-2019-8763  https://nvd.nist.gov/vuln/detail/CVE-2019-8763
[ 12 ] CVE-2019-8764  https://nvd.nist.gov/vuln/detail/CVE-2019-8764
[ 13 ] CVE-2019-8765  https://nvd.nist.gov/vuln/detail/CVE-2019-8765
[ 14 ] CVE-2019-8766  https://nvd.nist.gov/vuln/detail/CVE-2019-8766
[ 15 ] CVE-2019-8768  https://nvd.nist.gov/vuln/detail/CVE-2019-8768
[ 16 ] CVE-2019-8769  https://nvd.nist.gov/vuln/detail/CVE-2019-8769
[ 17 ] CVE-2019-8771  https://nvd.nist.gov/vuln/detail/CVE-2019-8771
[ 18 ] CVE-2019-8782  https://nvd.nist.gov/vuln/detail/CVE-2019-8782
[ 19 ] CVE-2019-8783  https://nvd.nist.gov/vuln/detail/CVE-2019-8783
[ 20 ] CVE-2019-8808  https://nvd.nist.gov/vuln/detail/CVE-2019-8808
[ 21 ] CVE-2019-8811  https://nvd.nist.gov/vuln/detail/CVE-2019-8811
[ 22 ] CVE-2019-8812  https://nvd.nist.gov/vuln/detail/CVE-2019-8812
[ 23 ] CVE-2019-8813  https://nvd.nist.gov/vuln/detail/CVE-2019-8813
[ 24 ] CVE-2019-8814  https://nvd.nist.gov/vuln/detail/CVE-2019-8814
[ 25 ] CVE-2019-8815  https://nvd.nist.gov/vuln/detail/CVE-2019-8815
[ 26 ] CVE-2019-8816  https://nvd.nist.gov/vuln/detail/CVE-2019-8816
[ 27 ] CVE-2019-8819  https://nvd.nist.gov/vuln/detail/CVE-2019-8819
[ 28 ] CVE-2019-8820  https://nvd.nist.gov/vuln/detail/CVE-2019-8820
[ 29 ] CVE-2019-8821  https://nvd.nist.gov/vuln/detail/CVE-2019-8821
[ 30 ] CVE-2019-8822  https://nvd.nist.gov/vuln/detail/CVE-2019-8822
[ 31 ] CVE-2019-8823  https://nvd.nist.gov/vuln/detail/CVE-2019-8823
[ 32 ] CVE-2019-8835  https://nvd.nist.gov/vuln/detail/CVE-2019-8835
[ 33 ] CVE-2019-8844  https://nvd.nist.gov/vuln/detail/CVE-2019-8844
[ 34 ] CVE-2019-8846  https://nvd.nist.gov/vuln/detail/CVE-2019-8846
[ 35 ] CVE-2020-3862  https://nvd.nist.gov/vuln/detail/CVE-2020-3862
[ 36 ] CVE-2020-3864  https://nvd.nist.gov/vuln/detail/CVE-2020-3864
[ 37 ] CVE-2020-3865  https://nvd.nist.gov/vuln/detail/CVE-2020-3865
[ 38 ] CVE-2020-3867  https://nvd.nist.gov/vuln/detail/CVE-2020-3867
[ 39 ] CVE-2020-3868  https://nvd.nist.gov/vuln/detail/CVE-2020-3868

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202003-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2019-10-07-1 macOS Catalina 10.15

macOS Catalina 10.15 is now available and addresses the following:

AMD Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8748: Lilang Wu and Moony Li of TrendMicro Mobile Security Research Team

apache_mod_php Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: Multiple issues in PHP Description: Multiple issues were addressed by updating to PHP version 7.3.8. CVE-2019-11041 CVE-2019-11042

CoreAudio Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: Processing a maliciously crafted movie may result in the disclosure of process memory Description: A memory corruption issue was addressed with improved validation. CVE-2019-8705: riusksk of VulWar Corp working with Trend Micro's Zero Day Initiative

Crash Reporter Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: The "Share Mac Analytics" setting may not be disabled when a user deselects the switch to share analytics Description: A race condition existed when reading and writing user preferences. CVE-2019-8757: William Cerniuk of Core Development, LLC

Intel Graphics Driver Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8758: Lilang Wu and Moony Li of Trend Micro

IOGraphics Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: A malicious application may be able to determine kernel memory layout Description: A logic issue was addressed with improved restrictions. CVE-2019-8755: Lilang Wu and Moony Li of Trend Micro

Kernel Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8717: Jann Horn of Google Project Zero

Kernel Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2019-8781: Linus Henze (pinauten.de)

Notes Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: A local user may be able to view a user's locked notes Description: The contents of locked notes sometimes appeared in search results. CVE-2019-8730: Jamie Blumberg (@jamie_blumberg) of Virginia Polytechnic Institute and State University

PDFKit Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: An attacker may be able to exfiltrate the contents of an encrypted PDF Description: An issue existed in the handling of links in encrypted PDFs. CVE-2019-8772: Jens Müller of Ruhr University Bochum, Fabian Ising of FH Münster University of Applied Sciences, Vladislav Mladenov of Ruhr University Bochum, Christian Mainka of Ruhr University Bochum, Sebastian Schinzel of FH Münster University of Applied Sciences, and Jörg Schwenk of Ruhr University Bochum

SharedFileList Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: A malicious application may be able to access recent documents Description: The issue was addressed with improved permissions logic. CVE-2019-8770: Stanislav Zinukhov of Parallels International GmbH

sips Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8701: Simon Huang(@HuangShaomang), Rong Fan(@fanrong1992) and pjf of IceSword Lab of Qihoo 360

UIFoundation Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: Processing a maliciously crafted text file may lead to arbitrary code execution Description: A buffer overflow was addressed with improved bounds checking. CVE-2019-8745: riusksk of VulWar Corp working with Trend Micro's Zero Day Initiative

WebKit Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: Visiting a maliciously crafted website may reveal browsing history Description: An issue existed in the drawing of web page elements. CVE-2019-8769: Piérre Reimertz (@reimertz)

WebKit Available for: MacBook (Early 2015 and later), MacBook Air (Mid 2012 and later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and later), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro (Late 2013 and later) Impact: A user may be unable to delete browsing history items Description: "Clear History and Website Data" did not clear the history. CVE-2019-8768: Hugo S. Diaz (coldpointblue)

Additional recognition

Finder We would like to acknowledge Csaba Fitzl (@theevilbit) for their assistance.

Gatekeeper We would like to acknowledge Csaba Fitzl (@theevilbit) for their assistance.

Safari Data Importing We would like to acknowledge Kent Zoya for their assistance.

Simple certificate enrollment protocol (SCEP) We would like to acknowledge an anonymous researcher for their assistance.

Telephony We would like to acknowledge Phil Stokes from SentinelOne for their assistance.

Installation note:

macOS Catalina 10.15 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl2blu0ACgkQBz4uGe3y
0M1A7g//e9fSj7PMQLPpztkv54U3jAPgU5jxKEIeSxvImDLg94YFDH1RxiZ8UP+4
R8tb2vEi+gEV/MWHQyExunrUoxMc0szqFEEyTcA1nxUMTsYtmQNDVeMlv4nc9sOs
n3Eh1wajdkmnBJoEzQoJfM7W09ND0eFcyr2ucnH7bZXQWkG4ZdJwgtCA0kdlcODK
Y7730ZREKqt88cBKJMow0y2CyeCWK4E1yWD6OTx0Iqf2fZXNinZw/ViDQEOrULy0
Dydi9GF8BmTWNQfiRd9quYN3k0ETe3jMYv7SFwv3LV820OobvY0qlSOAucjkjcNe
SKhbewe2MRo5EXCRVPYgVMW9elVFtjgSITr7B7a/u6NGUW2jhFj1EeonvOaKDUqu
Kybq7oa3F4EY1hZRs288GzIFdV8osjwggAJ4AithJVEa8fhepS4Q9wIDsEHgkHZa
/epkzfoXTRNBMC2qf87i1vbLSrN9qxegxHoGn/dVzz/p008m3AfKZmndZ6vRG0ac
jv/lw1lhaKVKyusvix3MU5GVwZvGVqYuqfISp+uaJEBJ4nuUw4LKuzimCAjjCmnw
CV2Mz9aZG1PX5KrfuYwEc/bw49ODnCW3KiaCD0XlO4MdtEDA9lYoUdmsCbnmMzIa
rJ3xEcFpjOnJVVXLIWopXzIb23/5YaKctqcRScfmGpoHKRIkzQo=
=ibLV
-----END PGP SIGNATURE-----
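The database record below reports CVSS v2 sub-scores for CVE-2019-8769 (base 4.3, exploitability 8.6, impact 2.9, vector AV:N/AC:M/Au:N/C:P/I:N/A:N). As a sanity check, these values follow from the standard CVSS v2 equations; a minimal sketch (the metric weights come from the CVSS v2 specification tables, not from this record):

```python
# Sketch of the CVSS v2 base-score equations, applied to the vector
# AV:N/AC:M/Au:N/C:P/I:N/A:N reported for CVE-2019-8769.
def cvss2_base(av, ac, au, c, i, a):
    # Impact and Exploitability sub-scores per the CVSS v2 specification.
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0 if impact == 0 else 1.176
    base = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f
    return round(base, 1), round(exploitability, 1), round(impact, 1)

# Weights: AV:N=1.0, AC:M=0.61, Au:N=0.704, C:P=0.275, I:N=0.0, A:N=0.0
base, expl, imp = cvss2_base(av=1.0, ac=0.61, au=0.704, c=0.275, i=0.0, a=0.0)
print(base, expl, imp)  # 4.3 8.6 2.9 -- matches the record below
```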

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1854",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "mac os x",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.15"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.1"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 10.9 earlier"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 7.16 (includes aas 8.2) earlier"
      },
      {
        "model": "ios",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "12.4.4 earlier"
      },
      {
        "model": "ios",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3 earlier"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3 earlier"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "12.10.3 for windows earlier"
      },
      {
        "model": "macos catalina",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.15.2 earlier"
      },
      {
        "model": "macos high sierra",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.13.6 (security update 2019-007 not applied )"
      },
      {
        "model": "macos mojave",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.14.6 (security update 2019-002 not applied )"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.4 earlier"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.3 earlier"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "5.3.4 earlier"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "6.1.1 earlier"
      },
      {
        "model": "xcode",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "11.3 earlier"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_catalina",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_high_sierra",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_mojave",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:watchos",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:xcode",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple,Debian,Red Hat,Gentoo",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2019-8769",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-8769",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-160204",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 2.8,
            "id": "CVE-2019-8769",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:N/A:N",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-8769",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201910-322",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-160204",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-8769",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "An issue existed in the drawing of web page elements. The issue was addressed with improved logic. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. Apple Has released an update for each product.The expected impact depends on each vulnerability, but can be affected as follows: * information leak * Falsification of information * Arbitrary code execution * Service operation interruption (DoS) * Privilege escalation * Authentication bypass. Apple macOS Catalina is a set of dedicated operating systems developed by Apple for Mac computers. WebKit is one of the web browser engine components. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to universal cross site scripting. Processing maliciously crafted web content may lead to arbitrary code execution. 
Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may result in the disclosure of process memory. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13, iTunes for Windows 12.10.1, iCloud for Windows 10.7, iCloud for Windows 7.14. Processing maliciously crafted web content may lead to universal cross site scripting. Processing maliciously crafted web content may lead to arbitrary code execution. 
Processing maliciously crafted web content may lead to universal cross site scripting. Processing maliciously crafted web content may lead to universal cross site scripting. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to universal cross site scripting. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to universal cross site scripting. This issue is fixed in tvOS 13, iTunes for Windows 12.10.1, iCloud for Windows 10.7, iCloud for Windows 7.14. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. 
This issue is fixed in tvOS 13, iTunes for Windows 12.10.1, iCloud for Windows 10.7, iCloud for Windows 7.14. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in tvOS 13, iTunes for Windows 12.10.1, iCloud for Windows 10.7, iCloud for Windows 7.14. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13, iTunes for Windows 12.10.1, iCloud for Windows 10.7, iCloud for Windows 7.14. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13, iTunes for Windows 12.10.1, iCloud for Windows 10.7, iCloud for Windows 7.14. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.1 and iPadOS 13.1, tvOS 13, Safari 13.0.1, iTunes for Windows 12.10.1, iCloud for Windows 10.7, iCloud for Windows 7.14. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to universal cross site scripting. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. A user may be unable to delete browsing history items. Maliciously crafted web content may violate iframe sandboxing policy. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. 
Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to universal cross site scripting. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, watchOS 6.1, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. 
This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13.3, iCloud for Windows 10.9, iOS 13.3 and iPadOS 13.3, Safari 13.0.4, iTunes 12.10.3 for Windows, iCloud for Windows 7.16. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13.3, watchOS 6.1.1, iCloud for Windows 10.9, iOS 13.3 and iPadOS 13.3, Safari 13.0.4, iTunes 12.10.3 for Windows, iCloud for Windows 7.16. Processing maliciously crafted web content may lead to arbitrary code execution. This issue is fixed in tvOS 13.3, iCloud for Windows 10.9, iOS 13.3 and iPadOS 13.3, Safari 13.0.4, iTunes 12.10.3 for Windows, iCloud for Windows 7.16. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. A malicious website may be able to cause a denial of service. This issue is fixed in iCloud for Windows 7.17, iTunes 12.10.4 for Windows, iCloud for Windows 10.9.2, tvOS 13.3.1, Safari 13.0.5, iOS 13.3.1 and iPadOS 13.3.1. 
A DOM object context may not have had a unique security origin. Processing maliciously crafted web content may lead to arbitrary code execution or to universal cross-site scripting. These issues are fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, and iCloud for Windows 7.17. A file URL may be incorrectly processed; this issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, and iCloud for Windows 7.18. (CVE-2020-3885)
A race condition was addressed with additional validation; an application may be able to read restricted memory. This issue is fixed in the same releases. Processing maliciously crafted web content may lead to arbitrary code execution, and a remote attacker may be able to cause arbitrary code execution; these issues are fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, watchOS 6.2, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, and iCloud for Windows 7.18. A remote attacker may be able to cause arbitrary code execution.
This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, watchOS 6.2, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, and iCloud for Windows 7.18. Processing maliciously crafted web content may lead to arbitrary code execution, or to a cross-site scripting attack in an issue fixed in the same releases except watchOS. (CVE-2020-3902)

In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements.

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/11258

4. Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume
1813506 - Dockerfile not compatible with docker and buildah
1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement
1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node.
1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
1842254 - [NooBaa] Compression stats do not add up when compression id disabled
1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster
1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
1854501 - [Tracker-rhcs bug 1848494] pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount
1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e.
if the PV spec does not contain expansion params)
1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
1859478 - OCS 4.6: Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
1860034 - OCS 4.6 Deployment in ocs-ci: Toolbox pod in ContainerCreationError due to key admin-secret not found
1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:5633-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:5633
Issue date:        2021-02-24
CVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463
                   CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468
                   CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880
                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229
                   CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843
                   CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978
                   CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764
                   CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814
                   CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823
                   CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458
                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627
                   CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917
                   CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233
                   CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808
                   CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063
                   CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332
                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543
                   CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956
                   CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388
                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907
                   CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730
                   CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752
                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885
                   CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899
                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595
                   CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566
                   CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647
                   CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850
                   CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915
                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749
                   CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942
                   CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655
                   CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630
                   CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381
                   CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503
                   CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659
                   CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662
                   CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685
                   CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160
                   CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007
                   CVE-2021-3121
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the
appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

* crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)
* golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)
* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
* kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)
* containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)
* heketi: gluster-block volume password details available in logs (CVE-2020-10763)
* golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)
* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
* golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)
* golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

3.
Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

4. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state
1752220 - [OVN] Network Policy fails to work when project label gets overwritten
1756096 - Local storage operator should implement must-gather spec
1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
1768255 - installer reports 100% complete but failing components
1770017 - Init containers restart when the exited container is removed from node.
1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating
1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset
1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale
1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands
1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions
1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved'
1797766 - "Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor
1801089 - [OVN] Installation failed and monitoring pod not created due to some network error.
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - 
olmFull suite always fails once th suite is run on the same cluster\n1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz\n1837953 - Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks\n1838751 - [oVirt][Tracker] Re-enable skipped network tests\n1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups\n1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed\n1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP\n1841119 - Get rid of config patches and pass flags directly to kcm\n1841175 - When an Install Plan gets deleted, OLM does not create a new one\n1841381 - Issue with memoryMB validation\n1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option\n1844727 - Etcd container leaves grep and lsof zombie processes\n1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs\n1847074 - Filter bar layout issues at some screen widths on search page\n1848358 - CRDs with preserveUnknownFields:true don\u0027t reflect in status that they are non-structural\n1849543 - [4.5]kubeletconfig\u0027s description will show multiple lines for finalizers when upgrade from 4.4.8-\u003e4.5\n1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service\n1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard\n1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing\n1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD\n1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service\n1853115 - the restriction 
of --cloud option should be shown in help text. \n1853116 - `--to` option does not work with `--credentials-requests` flag. \n1853352 - [v2v][UI] Storage Class fields Should  Not be empty  in VM  disks view\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854567 - \"Installed Operators\" list showing \"duplicated\" entries during installation\n1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present\n1855351 - Inconsistent Installer reactions to Ctrl-C during user input process\n1855408 - OVN cluster unstable after running minimal scale test\n1856351 - Build page should show metrics for when the build ran, not the last 30 minutes\n1856354 - New APIServices missing from OpenAPI definitions\n1857446 - ARO/Azure: excessive pod memory allocation causes node lockup\n1857877 - Operator upgrades can delete existing CSV before completion\n1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed\n1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created\n1860136 - default ingress does not propagate annotations to route object on update\n1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as \"Failed\"\n1860518 - unable to stop a crio pod\n1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller\n1862430 - LSO: PV creation lock should not be acquired in a loop\n1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
\n1862608 - Virtual media does not work on hosts using BIOS, only UEFI\n1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network\n1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff\n1865839 - rpm-ostree fails with \"System transaction in progress\" when moving to kernel-rt\n1866043 - Configurable table column headers can be illegible\n1866087 - Examining agones helm chart resources results in \"Oh no!\"\n1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info\n1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement\n1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity\n1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there\u2019s no indication on which labels offer tooltip/help\n1866340 - [RHOCS Usability Study][Dashboard] It was not clear why \u201cNo persistent storage alerts\u201d was prominently displayed\n1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations\n1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le \u0026 s390x\n1866482 - Few errors are seen when oc adm must-gather is run\n1866605 - No metadata.generation set for build and buildconfig objects\n1866873 - MCDDrainError \"Drain failed on  , updates may be blocked\" missing rendered node name\n1866901 - Deployment strategy for BMO allows multiple pods to run at the same time\n1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
\n1867165 - Cannot assign static address to baremetal install bootstrap vm\n1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig\n1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS\n1867477 - HPA monitoring cpu utilization fails for deployments which have init containers\n1867518 - [oc] oc should not print so many goroutines when ANY command fails\n1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on  250 node cluster\n1867965 - OpenShift Console Deployment Edit overwrites deployment yaml\n1868004 - opm index add appears to produce image with wrong registry server binary\n1868065 - oc -o jsonpath prints possible warning / bug \"Unable to decode server response into a Table\"\n1868104 - Baremetal actuator should not delete Machine objects\n1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead\n1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters\n1868527 - OpenShift Storage using VMWare vSAN receives error \"Failed to add disk \u0027scsi0:2\u0027\" when mounted pod is created on separate node\n1868645 - After a disaster recovery pods a stuck in \"NodeAffinity\" state and not running\n1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation\n1868765 - [vsphere][ci] could not reserve an IP address: no available addresses\n1868770 - catalogSource named \"redhat-operators\" deleted in a disconnected cluster\n1868976 - Prometheus error opening query log file on EBS backed PVC\n1869293 - The configmap name looks confusing in aide-ds pod logs\n1869606 - crio\u0027s failing to delete a network namespace\n1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes\n1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  
[Conformance]\n1870373 - Ingress Operator reports available when DNS fails to provision\n1870467 - D/DC Part of Helm / Operator Backed should not have HPA\n1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json\n1870800 - [4.6] Managed Column not appearing on Pods Details page\n1871170 - e2e tests are needed to validate the functionality of the etcdctl container\n1872001 - EtcdDiscoveryDomain no longer needed\n1872095 - content are expanded to the whole line when only one column in table on Resource Details page\n1872124 - Could not choose device type as \"disk\" or \"part\" when create localvolumeset from web console\n1872128 - Can\u0027t run container with hostPort on ipv6 cluster\n1872166 - \u0027Silences\u0027 link redirects to unexpected \u0027Alerts\u0027 view after creating a silence in the Developer perspective\n1872251 - [aws-ebs-csi-driver] Verify job in CI doesn\u0027t check for vendor dir sanity\n1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them\n1872821 - [DOC] Typo in Ansible Operator Tutorial\n1872907 - Fail to create CR from generated Helm Base Operator\n1872923 - Click \"Cancel\" button on the \"initialization-resource\" creation form page should send users to the \"Operator details\" page instead of \"Install Operator\" page (previous page)\n1873007 - [downstream] failed to read config when running the operator-sdk in the home path\n1873030 - Subscriptions without any candidate operators should cause resolution to fail\n1873043 - Bump to latest available 1.19.x k8s\n1873114 - Nodes goes into NotReady state (VMware)\n1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem\n1873305 - Failed to power on /inspect node when using Redfish protocol\n1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information\n1873480 - 
Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: \u201c?\u201d button/icon in Developer Console -\u003eNavigation\n1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working\n1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name \u003e 63 characters\n1874057 - Pod stuck in CreateContainerError - error msg=\"container_linux.go:348: starting container process caused \\\"chdir to cwd (\\\\\\\"/mount-point\\\\\\\") set in config.json failed: permission denied\\\"\"\n1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver\n1874192 - [RFE] \"Create Backing Store\" page doesn\u0027t allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider\n1874240 - [vsphere] unable to deprovision - Runtime error list attached objects\n1874248 - Include validation for vcenter host in the install-config\n1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6\n1874583 - apiserver tries and fails to log an event when shutting down\n1874584 - add retry for etcd errors in kube-apiserver\n1874638 - Missing logging for nbctl daemon\n1874736 - [downstream] no version info for the helm-operator\n1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution\n1874968 - Accessibility: The project selection drop down is a keyboard trap\n1875247 - Dependency resolution error \"found more than one head for channel\" is unhelpful for users\n1875516 - disabled scheduling is easy to miss in node page of OCP console\n1875598 - machine status is Running for a master node which has been terminated from the console\n1875806 - When creating a service of type \"LoadBalancer\" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. 
\n1876166 - need to be able to disable kube-apiserver connectivity checks\n1876469 - Invalid doc link on yaml template schema description\n1876701 - podCount specDescriptor change doesn\u0027t take effect on operand details page\n1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt\n1876935 - AWS volume snapshot is not deleted after the cluster is destroyed\n1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted\n1877105 - add redfish to enabled_bios_interfaces\n1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`\n1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown\n1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only \u0027rootDevices\u0027\n1877681 - Manually created PV can not be used\n1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53\n1877740 - RHCOS unable to get ip address during first boot\n1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5\n1877919 - panic in multus-admission-controller\n1877924 - Cannot set BIOS config using Redfish with Dell iDracs\n1878022 - Met imagestreamimport error when import the whole image repository\n1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default \"Filesystem Name\" instead of providing a textbox, \u0026 the name should be validated\n1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status\n1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM\n1878766 - CPU consumption on nodes is higher than the CPU count of the node. \n1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
\n1878823 - \"oc adm release mirror\" generating incomplete imageContentSources when using \"--to\" and \"--to-release-image\"\n1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode\n1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used\n1878953 - RBAC error shows when normal user access pvc upload page\n1878956 - `oc api-resources` does not include API version\n1878972 - oc adm release mirror removes the architecture information\n1879013 - [RFE]Improve CD-ROM interface selection\n1879056 - UI should allow to change or unset the evictionStrategy\n1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled\n1879094 - RHCOS dhcp kernel parameters not working as expected\n1879099 - Extra reboot during 4.5 -\u003e 4.6 upgrade\n1879244 - Error adding container to network \"ipvlan-host-local\": \"master\" field is required\n1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder\n1879282 - Update OLM references to point to the OLM\u0027s new doc site\n1879283 - panic after nil pointer dereference in pkg/daemon/update.go\n1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests\n1879419 - [RFE]Improve boot source description for \u0027Container\u0027 and \u2018URL\u2019\n1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted. 
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - `oc explain localvolumediscovery` returns empty description
1887751 - `oc explain localvolumediscoveryresult` returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels needs to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Dont prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ...
must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\"  \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe  Translation of  \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
\n1898407 - [Deployment timing regression] Deployment takes longer with 4.7\n1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service\n1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine\n1898500 - Failure to upgrade operator when a Service is included in a Bundle\n1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic\n1898532 - Display names defined in specDescriptors not respected\n1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted\n1898613 - Whereabouts should exclude IPv6 ranges\n1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase\n1898679 - Operand creation form - Required \"type: object\" properties (Accordion component) are missing red asterisk\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator\n1898839 - Wrong YAML in operator metadata\n1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job\n1898873 - Remove TechPreview Badge from Monitoring\n1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way\n1899111 - [RFE] Update jenkins-maven-agen to maven36\n1899128 - VMI details screen -\u003e show the warning that it is preferable to have a VM only if the VM actually does not exist\n1899175 - bump the RHCOS boot images for 4.7\n1899198 - Use new packages for ipa ramdisks\n1899200 - In Installed Operators page I cannot search for an Operator by it\u0027s name\n1899220 - Support AWS IMDSv2\n1899350 - configure-ovs.sh doesn\u0027t configure bonding options\n1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error \"An error occurred Not Found\"\n1899459 - 
Failed to start monitoring pods once the operator removed from override list of CVO\n1899515 - Passthrough credentials are not immediately re-distributed on update\n1899575 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899582 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899588 - Operator objects are re-created after all other associated resources have been deleted\n1899600 - Increased etcd fsync latency as of OCP 4.6\n1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup\n1899627 - Project dashboard Active status using small icon\n1899725 - Pods table does not wrap well with quick start sidebar open\n1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)\n1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality\n1899835 - catalog-operator repeatedly crashes with \"runtime error: index out of range [0] with length 0\"\n1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap\n1899853 - additionalSecurityGroupIDs not working for master nodes\n1899922 - NP changes sometimes influence new pods. 
\n1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet\n1900008 - Fix internationalized sentence fragments in ImageSearch.tsx\n1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx\n1900020 - Remove \u0026apos; from internationalized keys\n1900022 - Search Page - Top labels field is not applied to selected Pipeline resources\n1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently\n1900126 - Creating a VM results in suggestion to create a default storage class when one already exists\n1900138 - [OCP on RHV] Remove insecure mode from the installer\n1900196 - stalld is not restarted after crash\n1900239 - Skip \"subPath should be able to unmount\" NFS test\n1900322 - metal3 pod\u0027s toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists\n1900377 - [e2e][automation] create new css selector for active users\n1900496 - (release-4.7) Collect spec config for clusteroperator resources\n1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks\n1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue\n1900759 - include qemu-guest-agent by default\n1900790 - Track all resource counts via telemetry\n1900835 - Multus errors when cachefile is not found\n1900935 - `oc adm release mirror` panic panic: runtime error\n1900989 - accessing the route cannot wake up the idled resources\n1901040 - When scaling down the status of the node is stuck on deleting\n1901057 - authentication operator health check failed when installing a cluster behind proxy\n1901107 - pod donut shows incorrect information\n1901111 - Installer dependencies are broken\n1901200 - linuxptp-daemon crash when enable debug log level\n1901301 - CBO should handle platform=BM without provisioning CR\n1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly\n1901363 - High Podready 
Latency due to timed out waiting for annotations\n1901373 - redundant bracket on snapshot restore button\n1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with \"timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true\"\n1901395 - \"Edit virtual machine template\" action link should be removed\n1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting\n1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP\n1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema\n1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod \"before all\" hook for \"creates the resource instance\"\n1901604 - CNO blocks editing Kuryr options\n1901675 - [sig-network] multicast when using one of the plugins \u0027redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy\u0027 should allow multicast traffic in namespaces where it is enabled\n1901909 - The device plugin pods / cni pod are restarted every 5 minutes\n1901982 - [sig-builds][Feature:Builds] build can reference a cluster service  with a build being created from new-build should be able to run a build that references a cluster service\n1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error\n1902059 - Wire a real signer for service accout issuer\n1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902157 - The DaemonSet machine-api-termination-handler couldn\u0027t allocate Pod\n1902253 - MHC status doesnt set RemediationsAllowed = 0\n1902299 - Failed to mirror operator catalog - error: destination registry required\n1902545 - Cinder csi 
driver node pod should add nodeSelector for Linux\n1902546 - Cinder csi driver node pod doesn\u0027t run on master node\n1902547 - Cinder csi driver controller pod doesn\u0027t run on master node\n1902552 - Cinder csi driver does not use the downstream images\n1902595 - Project workloads list view doesn\u0027t show alert icon and hover message\n1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent\n1902601 - Cinder csi driver pods run as BestEffort qosClass\n1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group\n1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails\n1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked\n1902824 - failed to generate semver informed package manifest: unable to determine default channel\n1902894 - hybrid-overlay-node crashing trying to get node object during initialization\n1902969 - Cannot load vmi detail page\n1902981 - It should default to current namespace when create vm from template\n1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file  via s3:// URI\n1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry\n1903034 - OLM continuously printing debug logs\n1903062 - [Cinder csi driver] Deployment mounted volume have no write access\n1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready\n1903107 - Enable vsphere-problem-detector e2e tests\n1903164 - OpenShift YAML editor jumps to top every few seconds\n1903165 - Improve Canary Status Condition handling for e2e tests\n1903172 - Column Management: Fix sticky footer on scroll\n1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled\n1903188 - [Descheduler] cluster log reports failed to 
validate server configuration\" err=\"unsupported log format:\n1903192 - Role name missing on create role binding form\n1903196 - Popover positioning is misaligned for Overview Dashboard status items\n1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. \n1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components\n1903248 - Backport Upstream Static Pod UID patch\n1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]\n1903290 - Kubelet repeatedly log the same log line from exited containers\n1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. \n1903382 - Panic when task-graph is canceled with a TaskNode with no tasks\n1903400 - Migrate a VM which is not running goes to pending state\n1903402 - Nic/Disk on VMI overview should link to VMI\u0027s nic/disk page\n1903414 - NodePort is not working when configuring an egress IP address\n1903424 - mapi_machine_phase_transition_seconds_sum doesn\u0027t work\n1903464 - \"Evaluating rule failed\" for \"record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\" and \"record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"\n1903639 - Hostsubnet gatherer produces wrong output\n1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service\n1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started\n1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image\n1903717 - Handle different Pod selectors for metal3 Deployment\n1903733 - Scale up followed by scale down can delete all running workers\n1903917 - Failed to load \"Developer Catalog\" page\n1903999 - Httplog response code is always zero\n1904026 - The quota controllers should resync on new 
resources and make progress\n1904064 - Automated cleaning is disabled by default\n1904124 - DHCP to static lease script doesn\u0027t work correctly if starting with infinite leases\n1904125 - Boostrap VM .ign image gets added into \u0027default\u0027 pool instead of \u003ccluster-name\u003e-\u003cid\u003e-bootstrap\n1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails\n1904133 - KubeletConfig flooded with failure conditions\n1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart\n1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !\n1904244 - MissingKey errors for two plugins using i18next.t\n1904262 - clusterresourceoverride-operator has version: 1.0.0 every build\n1904296 - VPA-operator has version: 1.0.0 every build\n1904297 - The index image generated by \"opm index prune\" leaves unrelated images\n1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards\n1904385 - [oVirt] registry cannot mount volume on 4.6.4 -\u003e 4.6.6 upgrade\n1904497 - vsphere-problem-detector: Run on vSphere cloud only\n1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set\n1904502 - vsphere-problem-detector: allow longer timeouts for some operations\n1904503 - vsphere-problem-detector: emit alerts\n1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)\n1904578 - metric scraping for vsphere problem detector is not configured\n1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -\u003e 4.6.6 upgrade\n1904663 - IPI pointer customization MachineConfig always generated\n1904679 - [Feature:ImageInfo] Image info should display information about images\n1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image\n1904684 - [sig-cli] oc debug 
ensure it works with image streams\n1904713 - Helm charts with kubeVersion restriction are filtered incorrectly\n1904776 - Snapshot modal alert is not pluralized\n1904824 - Set vSphere hostname from guestinfo before NM starts\n1904941 - Insights status is always showing a loading icon\n1904973 - KeyError: \u0027nodeName\u0027 on NP deletion\n1904985 - Prometheus and thanos sidecar targets are down\n1904993 - Many ampersand special characters are found in strings\n1905066 - QE - Monitoring test cases - smoke test suite automation\n1905074 - QE -Gherkin linter to maintain standards\n1905100 - Too many haproxy processes in default-router pod causing high load average\n1905104 - Snapshot modal disk items missing keys\n1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm\n1905119 - Race in AWS EBS determining whether custom CA bundle is used\n1905128 - [e2e][automation] e2e tests succeed without actually execute\n1905133 - operator conditions special-resource-operator\n1905141 - vsphere-problem-detector: report metrics through telemetry\n1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures\n1905194 - Detecting broken connections to the Kube API takes up to 15 minutes\n1905221 - CVO transitions from \"Initializing\" to \"Updating\" despite not attempting many manifests\n1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP\n1905253 - Inaccurate text at bottom of Events page\n1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905299 - OLM fails to update operator\n1905307 - Provisioning CR is missing from must-gather\n1905319 - cluster-samples-operator containers are not requesting required memory resource\n1905320 - csi-snapshot-webhook is not requesting required memory resource\n1905323 - dns-operator is not requesting required memory resource\n1905324 - 
ingress-operator is not requesting required memory resource\n1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory\n1905328 - Changing the bound token service account issuer invalids previously issued bound tokens\n1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory\n1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails\n1905347 - QE - Design Gherkin Scenarios\n1905348 - QE - Design Gherkin Scenarios\n1905362 - [sriov] Error message \u0027Fail to update DaemonSet\u0027 always shown in sriov operator pod\n1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted\n1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input\n1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation\n1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1\n1905404 - The example of \"Remove the entrypoint on the mysql:latest image\" for `oc image append` does not work\n1905416 - Hyperlink not working from Operator Description\n1905430 - usbguard extension fails to install because of missing correct protobuf dependency version\n1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads\n1905502 - Test flake - unable to get https transport for ephemeral-registry\n1905542 - [GSS] The \"External\" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6. 
\n1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs\n1905610 - Fix typo in export script\n1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster\n1905640 - Subscription manual approval test is flaky\n1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry\n1905696 - ClusterMoreUpdatesModal component did not get internationalized\n1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes\n1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project\n1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster\n1905792 - [OVN]Cannot create egressfirewalll with dnsName\n1905889 - Should create SA for each namespace that the operator scoped\n1905920 - Quickstart exit and restart\n1905941 - Page goes to error after create catalogsource\n1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711\n1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters\n1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected\n1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it\n1906118 - OCS feature detection constantly polls storageclusters and storageclasses\n1906120 - \u0027Create Role Binding\u0027 form not setting user or group value when created from a user or group resource\n1906121 - [oc] After new-project creation, the kubeconfig file does not set the project\n1906134 - OLM should not create OperatorConditions for copied CSVs\n1906143 - CBO supports log levels\n1906186 - i18n: Translators are not able to translate `this` without context for alert manager config\n1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots\n1906274 - StorageClass installed by Cinder csi driver operator 
should enable the allowVolumeExpansion to support volume resize. \n1906276 - `oc image append` can\u0027t work with multi-arch image with  --filter-by-os=\u0027.*\u0027\n1906318 - use proper term for Authorized SSH Keys\n1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional\n1906356 - Unify Clone PVC boot source flow with URL/Container boot source\n1906397 - IPA has incorrect kernel command line arguments\n1906441 - HorizontalNav and NavBar have invalid keys\n1906448 - Deploy using virtualmedia with provisioning network disabled fails - \u0027Failed to connect to the agent\u0027 in ironic-conductor log\n1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project\n1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node\u0027s memory and killing them\n1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures\n1906511 - Root reprovisioning tests flaking often in CI\n1906517 - Validation is not robust enough and may prevent to generate install-confing. 
\n1906518 - Update snapshot API CRDs to v1\n1906519 - Update LSO CRDs to use v1\n1906570 - Number of disruptions caused by reboots on a cluster cannot be measured\n1906588 - [ci][sig-builds] nodes is forbidden: User \"e2e-test-jenkins-pipeline-xfghs-user\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\n1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs\n1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs\n1906679 - quick start panel styles are not loaded\n1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber\n1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form\n1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created\n1906689 - user can pin to nav configmaps and secrets multiple times\n1906691 - Add doc which describes disabling helm chart repository\n1906713 - Quick starts not accesible for a developer user\n1906718 - helm chart \"provided by Redhat\" is misspelled\n1906732 - Machine API proxy support should be tested\n1906745 - Update Helm endpoints to use Helm 3.4.x\n1906760 - performance issues with topology constantly re-rendering\n1906766 - localized `Autoscaled` \u0026 `Autoscaling` pod texts overlap with the pod ring\n1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section\n1906769 - topology fails to load with non-kubeadmin user\n1906770 - shortcuts on mobiles view occupies a lot of space\n1906798 - Dev catalog customization doesn\u0027t update console-config ConfigMap\n1906806 - Allow installing extra packages in ironic container images\n1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer\n1906835 - Topology view shows add page before then showing full project workloads\n1906840 - ClusterOperator should not have status \"Updating\" if 
operator version is the same as the release version\n1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy\n1906860 - Bump kube dependencies to v1.20 for Net Edge components\n1906864 - Quick Starts Tour: Need to adjust vertical spacing\n1906866 - Translations of Sample-Utils\n1906871 - White screen when sort by name in monitoring alerts page\n1906872 - Pipeline Tech Preview Badge Alignment\n1906875 - Provide an option to force backup even when API is not available. \n1906877 - Placeholder\u0027 value in search filter do not match column heading in Vulnerabilities\n1906879 - Add missing i18n keys\n1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install\n1906896 - No Alerts causes odd empty Table (Need no content message)\n1906898 - Missing User RoleBindings in the Project Access Web UI\n1906899 - Quick Start - Highlight Bounding Box Issue\n1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1\n1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers\n1906935 - Delete resources when Provisioning CR is deleted\n1906968 - Must-gather should support collecting kubernetes-nmstate resources\n1906986 - Ensure failed pod adds are retried even if the pod object doesn\u0027t change\n1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt\n1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change\n1907211 - beta promotion of p\u0026f switched storage version to v1beta1, making downgrades impossible. \n1907269 - Tooltips data are different when checking stack or not checking stack for the same time\n1907280 - Install tour of OCS not available. 
\n1907282 - Topology page breaks with white screen\n1907286 - The default mhc machine-api-termination-handler couldn\u0027t watch spot instance\n1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent\n1907293 - Increase timeouts in e2e tests\n1907295 - Gherkin script for improve management for helm\n1907299 - Advanced Subscription Badge for KMS and Arbiter not present\n1907303 - Align VM template list items by baseline\n1907304 - Use PF styles for selected template card in VM Wizard\n1907305 - Drop \u0027ISO\u0027 from CDROM boot source message\n1907307 - Support and provider labels should be passed on between templates and sources\n1907310 - Pin action should be renamed to favorite\n1907312 - VM Template source popover is missing info about added date\n1907313 - ClusterOperator objects cannot be overriden with cvo-overrides\n1907328 - iproute-tc package is missing in ovn-kube image\n1907329 - CLUSTER_PROFILE env. variable is not used by the CVO\n1907333 - Node stuck in degraded state, mcp reports \"Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached\"\n1907373 - Rebase to kube 1.20.0\n1907375 - Bump to latest available 1.20.x k8s - workloads team\n1907378 - Gather netnamespaces networking info\n1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity\n1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn\u0027t match the CSV one\n1907390 - prometheus-adapter: panic after k8s 1.20 bump\n1907399 - build log icon link on topology nodes cause app to reload\n1907407 - Buildah version not accessible\n1907421 - [4.6.1]oc-image-mirror command failed on \"error: unable to copy layer\"\n1907453 - Dev Perspective -\u003e running vm details -\u003e resources -\u003e no data\n1907454 - Install PodConnectivityCheck CRD with CNO\n1907459 - \"The Boot source is also maintained by Red Hat.\" is always 
shown for all boot sources\n1907475 - Unable to estimate the error rate of ingress across the connected fleet\n1907480 - `Active alerts` section throwing forbidden error for users. \n1907518 - Kamelets/Eventsource should be shown to user if they have create access\n1907543 - Korean timestamps are shown when users\u0027 language preferences are set to German-en-en-US\n1907610 - Update kubernetes deps to 1.20\n1907612 - Update kubernetes deps to 1.20\n1907621 - openshift/installer: bump cluster-api-provider-kubevirt version\n1907628 - Installer does not set primary subnet consistently\n1907632 - Operator Registry should update its kubernetes dependencies to 1.20\n1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters\n1907644 - fix up handling of non-critical annotations on daemonsets/deployments\n1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)\n1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication\n1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail\n1907767 - [e2e][automation]update test suite for kubevirt plugin\n1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don\u0027t allow master and worker nodes to boot\n1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade\n1907793 - Surface support info in VM template details\n1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage\n1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set\n1907863 - Quickstarts status not updating when starting the tour\n1907872 - dual stack with an ipv6 network fails on bootstrap phase\n1907874 - QE - Design Gherkin Scenarios for epic ODC-5057\n1907875 - No response when try to expand pvc with an invalid size\n1907876 - Refactoring record package to make gatherer configurable\n1907877 - QE - 
Automation- pipelines builder scripts\n1907883 - Fix Pipleine creation without namespace issue\n1907888 - Fix pipeline list page loader\n1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form\n1907892 - Unable to edit application deployed using \"From Devfile\" option\n1907893 - navSortUtils.spec.ts unit test failure\n1907896 - When a workload is added, Topology does not place the new items well\n1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template\n1907924 - Enable madvdontneed in OpenShift Images\n1907929 - Enable madvdontneed in OpenShift System Components Part 2\n1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot\n1907947 - The kubeconfig saved in tenantcluster shouldn\u0027t include anything that is not related to the current context\n1907948 - OCM-O bump to k8s 1.20\n1907952 - bump to k8s 1.20\n1907972 - Update OCM link to open Insights tab\n1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI\n1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916\n1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni\n1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk\n1908035 - dynamic-demo-plugin build does not generate dist directory\n1908135 - quick search modal is not centered over topology\n1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled\n1908159 - [AWS C2S] MCO fails to sync cloud config\n1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)\n1908180 - Add source for template is stucking in preparing pvc\n1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens\n1908231 - [Migration] The pods ovnkube-node are in  CrashLoopBackOff after SDN to 
OVN\n1908277 - QE - Automation- pipelines actions scripts\n1908280 - Documentation describing `ignore-volume-az` is incorrect\n1908296 - Fix pipeline builder form yaml switcher validation issue\n1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI\n1908323 - Create button missing for PLR in the search page\n1908342 - The new pv_collector_total_pv_count is not reported via telemetry\n1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name\n1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots\n1908349 - Volume snapshot tests are failing after 1.20 rebase\n1908353 - QE - Automation- pipelines runs scripts\n1908361 - bump to k8s 1.20\n1908367 - QE - Automation- pipelines triggers scripts\n1908370 - QE - Automation- pipelines secrets scripts\n1908375 - QE - Automation- pipelines workspaces scripts\n1908381 - Go Dependency Fixes for Devfile Lib\n1908389 - Loadbalancer Sync failing on Azure\n1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived\n1908407 - Backport Upstream 95269 to fix potential crash in kubelet\n1908410 - Exclude Yarn from VSCode search\n1908425 - Create Role Binding form subject type and name are undefined when All Project is selected\n1908431 - When the marketplace-operator pod get\u0027s restarted, the custom catalogsources are gone, as well as the pods\n1908434 - Remove \u0026apos from metal3-plugin internationalized strings\n1908437 - Operator backed with no icon has no badge associated with the CSV tag\n1908459 - bump to k8s 1.20\n1908461 - Add bugzilla component to OWNERS file\n1908462 - RHCOS 4.6 ostree removed dhclient\n1908466 - CAPO AZ Screening/Validating\n1908467 - Zoom in and zoom out in topology package should be sentence case\n1908468 - [Azure][4.7] Installer can\u0027t properly parse instance type with non integer memory size\n1908469 - nbdb failed to come up while bringing up OVNKubernetes 
cluster\n1908471 - OLM should bump k8s dependencies to 1.20\n1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests\n1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM\n1908545 - VM clone dialog does not open\n1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard\n1908562 - Pod readiness is not being observed in real world cases\n1908565 - [4.6] Cannot filter the platform/arch of the index image\n1908573 - Align the style of flavor\n1908583 - bootstrap does not run on additional networks if configured for master in install-config\n1908596 - Race condition on operator installation\n1908598 - Persistent Dashboard shows events for all provisioners\n1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state\n1908648 - Skip TestKernelType test on OKD, adjust TestExtensions\n1908650 - The title of customize wizard is inconsistent\n1908654 - cluster-api-provider: volumes and disks names shouldn\u0027t change by machine-api-operator\n1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]\n1908687 - Option to save user settings separate when using local bridge (affects console developers only)\n1908697 - Show `kubectl diff ` command in the oc diff help page\n1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom\n1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds\n1908717 - \"missing unit character in duration\" error in some network dashboards\n1908746 - [Safari] Drop Shadow doesn\u0027t works as expected on hover on workload\n1908747 - stale S3 CredentialsRequest in CCO manifest\n1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase\n1908830 - RHCOS 4.6 - Missing 
Initiatorname\n1908868 - Update empty state message for EventSources and Channels tab\n1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes\n1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference\n1908888 - Dualstack does not work with multiple gateways\n1908889 - Bump CNO to k8s 1.20\n1908891 - TestDNSForwarding DNS operator e2e test is failing frequently\n1908914 - CNO: upgrade nodes before masters\n1908918 - Pipeline builder yaml view sidebar is not responsive\n1908960 - QE - Design Gherkin Scenarios\n1908971 - Gherkin Script for pipeline debt 4.7\n1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated\n1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console\n1908998 - [cinder-csi-driver] doesn\u0027t detect the credentials change\n1909004 - \"No datapoints found\" for RHEL node\u0027s filesystem graph\n1909005 - i18n: workloads list view heading is not translated\n1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects\n1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type\n1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware\n1909067 - Web terminal should keep latest output when connection closes\n1909070 - PLR and TR Logs component is not streaming as fast as tkn\n1909092 - Error Message should not confuse user on Channel form\n1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page\n1909108 - Machine API components should use 1.20 dependencies\n1909116 - Catalog Sort Items dropdown is not aligned on Firefox\n1909198 - Move Sink action option is not working\n1909207 - Accessibility Issue on 
monitoring page\n1909236 - Remove pinned icon overlap on resource name\n1909249 - Intermittent packet drop from pod to pod\n1909276 - Accessibility Issue on create project modal\n1909289 - oc debug of an init container no longer works\n1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2\n1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle\n1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it\n1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O\n1909464 - Build operator-registry with golang-1.15\n1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found\n1909521 - Add kubevirt cluster type for e2e-test workflow\n1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created\n1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node\n1909610 - Fix available capacity when no storage class selected\n1909678 - scale up / down buttons available on pod details side panel\n1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined\n1909739 - Arbiter request data changes\n1909744 - cluster-api-provider-openstack: Bump gophercloud\n1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline\n1909791 - Update standalone kube-proxy config for EndpointSlice\n1909792 - Empty states for some details page subcomponents are not i18ned\n1909815 - Perspective switcher is only half-i18ned\n1909821 - OCS 4.7 LSO installation blocked because of Error \"Invalid value: \"integer\": spec.flexibleScaling in body\n1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn\u0027t installed in CI\n1909864 - 
promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing\n1909911 - [OVN]EgressFirewall caused a segfault\n1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument\n1909958 - Support Quick Start Highlights Properly\n1909978 - ignore-volume-az = yes not working on standard storageClass\n1909981 - Improve statement in template select step\n1909992 - Fail to pull the bundle image when using the private index image\n1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev\n1910036 - QE - Design Gherkin Scenarios ODC-4504\n1910049 - UPI: ansible-galaxy is not supported\n1910127 - [UPI on oVirt]:  Improve UPI Documentation\n1910140 - fix the api dashboard with changes in upstream kube 1.20\n1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment\u0027s containers with the OPERATOR_CONDITION_NAME Environment Variable\n1910165 - DHCP to static lease script doesn\u0027t handle multiple addresses\n1910305 - [Descheduler] - The minKubeVersion should be 1.20.0\n1910409 - Notification drawer is not localized for i18n\n1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials\n1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation\n1910501 - Installed Operators-\u003eOperand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page\n1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work\n1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready\n1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability\n1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded\n1910739 - Redfish-virtualmedia (idrac) deploy fails on \"The Virtual Media image server is already connected\"\n1910753 - Support 
Directory Path to Devfile\n1910805 - Missing translation for Pipeline status and breadcrumb text\n1910829 - Cannot delete a PVC if the dv\u0027s phase is WaitForFirstConsumer\n1910840 - Show Nonexistent  command info in the `oc rollback -h` help page\n1910859 - breadcrumbs doesn\u0027t use last namespace\n1910866 - Unify templates string\n1910870 - Unify template dropdown action\n1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6\n1911129 - Monitoring charts renders nothing when switching from a Deployment to \"All workloads\"\n1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard\n1911212 - [MSTR-998] API Performance Dashboard \"Period\" drop-down has a choice \"$__auto_interval_period\" which can bring \"1:154: parse error: missing unit character in duration\"\n1911213 - Wrong and misleading warning for VMs that were created manually (not from template)\n1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created\n1911269 - waiting for the build message present when build exists\n1911280 - Builder images are not detected for Dotnet, Httpd, NGINX\n1911307 - Pod Scale-up requires extra privileges in OpenShift web-console\n1911381 - \"Select Persistent Volume Claim project\" shows in customize wizard when select a source available template\n1911382 - \"source volumeMode (Block) and target volumeMode (Filesystem) do not match\" shows in VM Error\n1911387 - Hit error - \"Cannot read property \u0027value\u0027 of undefined\" while creating VM from template\n1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation\n1911418 - [v2v] The target storage class name is not displayed if default storage class is used\n1911434 - git ops empty state page displays icon with watermark\n1911443 - SSH Cretifiaction field should be validated\n1911465 - IOPS display wrong unit\n1911474 - Devfile Application Group Does Not Delete Cleanly (errors)\n1911487 - Pruning Deployments 
should use ReplicaSets instead of ReplicationController\n1911574 - Expose volume mode  on Upload Data form\n1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined\n1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel\n1911656 - using \u0027operator-sdk run bundle\u0027 to install operator successfully, but the command output said \u0027Failed to run bundle\u0027\u0027\n1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state\n1911782 - Descheduler should not evict pod used local storage by the PVC\n1911796 - uploading flow being displayed before submitting the form\n1912066 - The ansible type operator\u0027s manager container is not stable when managing the CR\n1912077 - helm operator\u0027s default rbac forbidden\n1912115 - [automation] Analyze job keep failing because of \u0027JavaScript heap out of memory\u0027\n1912237 - Rebase CSI sidecars for 4.7\n1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page\n1912409 - Fix flow schema deployment\n1912434 - Update guided tour modal title\n1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken\n1912523 - Standalone pod status not updating in topology graph\n1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion\n1912558 - TaskRun list and detail screen doesn\u0027t show Pending status\n1912563 - p\u0026f: carry 97206: clean up executing request on panic\n1912565 - OLM macOS local build broken by moby/term dependency\n1912567 - [OCP on RHV] Node becomes to \u0027NotReady\u0027 status when shutdown vm from RHV UI only on the second deletion\n1912577 - 4.1/4.2-\u003e4.3-\u003e...-\u003e 4.7 upgrade is stuck during 4.6-\u003e4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff\n1912590 - publicImageRepository not being populated\n1912640 - Go 
operator\u0027s controller pods is forbidden\n1912701 - Handle dual-stack configuration for NIC IP\n1912703 - multiple queries can\u0027t be plotted in the same graph under some conditons\n1912730 - Operator backed: In-context should support visual connector if SBO is not installed\n1912828 - Align High Performance VMs with High Performance in RHV-UI\n1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates\n1912852 - VM from wizard - available VM templates - \"storage\" field is \"0 B\"\n1912888 - recycler template should be moved to KCM operator\n1912907 - Helm chart repository index can contain unresolvable relative URL\u0027s\n1912916 - Set external traffic policy to cluster for IBM platform\n1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller\n1912938 - Update confirmation modal for quick starts\n1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment\n1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment\n1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver\n1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912977 - rebase upstream static-provisioner\n1913006 - Remove etcd v2 specific alerts with etcd_http* metrics\n1913011 - [OVN] Pod\u0027s external traffic not use egressrouter macvlan ip as a source ip\n1913037 - update static-provisioner base image\n1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state\n1913085 - Regression OLM uses scoped client for 
CRD installation\n1913096 - backport: cadvisor machine metrics are missing in k8s 1.19\n1913132 - The installation of Openshift Virtualization reports success early before it \u0027s succeeded eventually\n1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root\n1913196 - Guided Tour doesn\u0027t handle resizing of browser\n1913209 - Support modal should be shown for community supported templates\n1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort\n1913249 - update info alert this template is not aditable\n1913285 - VM list empty state should link to virtualization quick starts\n1913289 - Rebase AWS EBS CSI driver for 4.7\n1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled\n1913297 - Remove restriction of taints for arbiter node\n1913306 - unnecessary scroll bar is present on quick starts panel\n1913325 - 1.20 rebase for openshift-apiserver\n1913331 - Import from git: Fails to detect Java builder\n1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used\n1913343 - (release-4.7) Added changelog file for insights-operator\n1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator\n1913371 - Missing i18n key \"Administrator\" in namespace \"console-app\" and language \"en.\"\n1913386 - users can see metrics of namespaces for which they don\u0027t have rights when monitoring own services with prometheus user workloads\n1913420 - Time duration setting of resources is not being displayed\n1913536 - 4.6.9 -\u003e 4.7 upgrade hangs.  
RHEL 7.9 worker stuck on \"error enabling unit: Failed to execute operation: File exists\\\\n\\\"\n1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase\n1913560 - Normal user cannot load template on the new wizard\n1913563 - \"Virtual Machine\" is not on the same line in create button when logged with normal user\n1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table\n1913568 - Normal user cannot create template\n1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker\n1913585 - Topology descriptive text fixes\n1913608 - Table data contains data value None after change time range in graph and change back\n1913651 - Improved Red Hat image and crashlooping OpenShift pod collection\n1913660 - Change location and text of Pipeline edit flow alert\n1913685 - OS field not disabled when creating a VM from a template\n1913716 - Include additional use of existing libraries\n1913725 - Refactor Insights Operator Plugin states\n1913736 - Regression: fails to deploy computes when using root volumes\n1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes\n1913751 - add third-party network plugin test suite to openshift-tests\n1913783 - QE-To fix the merging pr issue, commenting the afterEach() block\n1913807 - Template support badge should not be shown for community supported templates\n1913821 - Need definitive steps about uninstalling descheduler operator\n1913851 - Cluster Tasks are not sorted in pipeline builder\n1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists\n1913951 - Update the Devfile Sample Repo to an Official Repo Host\n1913960 - Cluster Autoscaler should use 1.20 dependencies\n1913969 - Field dependency descriptor can sometimes cause an exception\n1914060 - Disk created from \u0027Import via Registry\u0027 cannot be used as boot disk\n1914066 - [sriov] sriov dp pod crash when delete ovs HW 
offload policy\n1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)\n1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances\n1914125 - Still using /dev/vde as default device path when create localvolume\n1914183 - Empty NAD page is missing link to quickstarts\n1914196 - target port in `from dockerfile` flow does nothing\n1914204 - Creating VM from dev perspective may fail with template not found error\n1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets\n1914212 - [e2e][automation] Add test to validate bootable disk souce\n1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes\n1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows\n1914287 - Bring back selfLink\n1914301 - User VM Template source should show the same provider as template itself\n1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs\n1914309 - /terminal page when WTO not installed shows nonsensical error\n1914334 - order of getting started samples is arbitrary\n1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel]  timeout on s390x\n1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI\n1914405 - Quick search modal should be opened when coming back from a selection\n1914407 - Its not clear that node-ca is running as non-root\n1914427 - Count of pods on the dashboard is incorrect\n1914439 - Typo in SRIOV port create command example\n1914451 - cluster-storage-operator pod running as root\n1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true\n1914642 - Customize Wizard Storage tab does not pass validation\n1914723 - SamplesTBRInaccessibleOnBoot 
Alert has a misspelling\n1914793 - device names should not be translated\n1914894 - Warn about using non-groupified api version\n1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug\n1914932 - Put correct resource name in relatedObjects\n1914938 - PVC disk is not shown on customization wizard general tab\n1914941 - VM Template rootdisk is not deleted after fetching default disk bus\n1914975 - Collect logs from openshift-sdn namespace\n1915003 - No estimate of average node readiness during lifetime of a cluster\n1915027 - fix MCS blocking iptables rules\n1915041 - s3:ListMultipartUploadParts is relied on implicitly\n1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons\n1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours\n1915085 - Pods created and rapidly terminated get stuck\n1915114 - [aws-c2s] worker machines are not create during install\n1915133 - Missing default pinned nav items in dev perspective\n1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource\n1915187 - Remove the \"Tech preview\" tag in web-console for volumesnapshot\n1915188 - Remove HostSubnet anonymization\n1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment\n1915217 - OKD payloads expect to be signed with production keys\n1915220 - Remove dropdown workaround for user settings\n1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure\n1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod\n1915277 - [e2e][automation]fix cdi upload form test\n1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout\n1915304 - Updating scheduling component builder \u0026 base images to be consistent with ART\n1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows 
node\n1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection\n1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod\n1915357 - Dev Catalog doesn\u0027t load anything if virtualization operator is installed\n1915379 - New template wizard should require provider and make support input a dropdown type\n1915408 - Failure in operator-registry kind e2e test\n1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation\n1915460 - Cluster name size might affect installations\n1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance\n1915540 - Silent 4.7 RHCOS install failure on ppc64le\n1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)\n1915582 - p\u0026f: carry upstream pr 97860\n1915594 - [e2e][automation] Improve test for disk validation\n1915617 - Bump bootimage for various fixes\n1915624 - \"Please fill in the following field: Template provider\" blocks customize wizard\n1915627 - Translate Guided Tour text. \n1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error\n1915647 - Intermittent White screen when the connector dragged to revision\n1915649 - \"Template support\" pop up is not a warning; checkbox text should be rephrased\n1915654 - [e2e][automation] Add a verification for Afinity modal should hint \"Matching node found\"\n1915661 - Can\u0027t run the \u0027oc adm prune\u0027 command in a pod\n1915672 - Kuryr doesn\u0027t work with selfLink disabled. 
\n1915674 - Golden image PVC creation - storage size should be taken from the template\n1915685 - Message for not supported template is not clear enough\n1915760 - Need to increase timeout to wait rhel worker get ready\n1915793 - quick starts panel syncs incorrectly across browser windows\n1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster\n1915818 - vsphere-problem-detector: use \"_totals\" in metrics\n1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol\n1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version\n1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0\n1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics\n1915885 - Kuryr doesn\u0027t support workers running on multiple subnets\n1915898 - TaskRun log output shows \"undefined\" in streaming\n1915907 - test/cmd/builds.sh uses docker.io\n1915912 - sig-storage-csi-snapshotter image not available\n1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard\n1915939 - Resizing the browser window removes Web Terminal Icon\n1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]\n1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7\n1915962 - ROKS: manifest with machine health check fails to apply in 4.7\n1915972 - Global configuration breadcrumbs do not work as expected\n1915981 - Install ethtool and conntrack in container for debugging\n1915995 - \"Edit RoleBinding Subject\" action under RoleBinding list page kebab actions causes unhandled exception\n1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups\n1916021 - OLM enters infinite loop if Pending CSV 
replaces itself\n1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry\n1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert\u0027s annotations\n1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk\n1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration\n1916145 - Explicitly set minimum versions of python libraries\n1916164 - Update csi-driver-nfs builder \u0026 base images to be consistent with ART\n1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7\n1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third\n1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2\n1916379 - error metrics from vsphere-problem-detector should be gauge\n1916382 - Can\u0027t create ext4 filesystems with Ignition\n1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving \u0027verified: false\u0027 even for verified updates\n1916401 - Deleting an ingress controller with a bad DNS Record hangs\n1916417 - [Kuryr] Must-gather does not have all Custom Resources information\n1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image\n1916454 - teach CCO about upgradeability from 4.6 to 4.7\n1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation\n1916502 - Boot disk mirroring fails with mdadm error\n1916524 - Two rootdisk shows on storage step\n1916580 - Default yaml is broken for VM and VM template\n1916621 - oc adm node-logs examples are wrong\n1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied, will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message maybe wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job get error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exists to the console-operator config
1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0
1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0
1917799 - Gather s list of names and versions of installed OLM operators
1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error
1917814 - Show Broker create option in eventing under admin perspective
1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1917872 - [oVirt] rebase on latest SDK 2021-01-12
1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image
1917938 - upgrade version of dnsmasq package
1917942 - Canary controller causes panic in ingress-operator
1918019 - Undesired scrollbars in markdown area of QuickStart
1918068 - Flaky olm integration tests
1918085 - reversed name of job and namespace in cvo log
1918112 - Flavor is not editable if a customize VM is created from cli
1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources
1918132 - i18n: Volume Snapshot Contents menu is not translated
1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2
1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP
1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026`
1918185 - Capitalization on PLR details page
1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections
1918318 - Kamelet connector's are not shown in eventing section under Admin perspective
1918351 - Gather SAP configuration (SCC & ClusterRoleBinding)
1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews
1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins
1918438 - [ja_JP, zh_CN] Serverless i18n misses
1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig
1918471 - CustomNoUpgrade Feature gates are not working correctly
1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk
1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART
1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART
1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197
1918639 - Event listener with triggerRef crashes the console
1918648 - Subscription page doesn't show InstallPlan correctly
1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack
1918748 - helmchartrepo is not http(s)_proxy-aware
1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI
1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin
1918826 - Insights popover icons are not horizontally aligned
1918879 - need better debug for bad pull secrets
1918958 - The default NMstate instance from the operator is incorrect
1919097 - Close bracket ")" missing at the end of the sentence in the UI
1919231 - quick search modal cut off on smaller screens
1919259 - Make "Add x" singular in Pipeline Builder
1919260 - VM Template list actions should not wrap
1919271 - NM prepender script doesn't support systemd-resolved
1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry
1919379 - dotnet logo out of date
1919387 - Console login fails with no error when it can't write to localStorage
1919396 - A11y Violation: svg-img-alt on Pod Status ring
1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified
1919750 - Search InstallPlans got Minified React error
1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted
1919823 - OCP 4.7 Internationalization Chinese tranlate issue
1919851 - Visualization does not render when Pipeline & Task share same name
1919862 - The tip information for `oc new-project  --skip-config-write` is wrong
1919876 - VM created via customize wizard cannot inherit template's PVC attributes
1919877 - Click on KSVC breaks with white screen
1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment
1919945 - user entered name value overridden by default value when selecting a git repository
1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference
1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions
1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration
1920200 - user-settings network error results in infinite loop of requests
1920205 - operator-registry e2e tests not working properly
1920214 - Bump golang to 1.15 in cluster-resource-override-admission
1920248 - re-running the pipelinerun with pipelinespec crashes the UI
1920320 - VM template field is "Not available" if it's created from common template
1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`
1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs
1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off
1920426 - Egress Router CNI OWNERS file should have ovn-k team members
1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username
1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time
1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn
1920481 - kuryr-cni pods using unreasonable amount of CPU
1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof
1920524 - Topology graph crashes adding Open Data Hub operator
1920526 - catalog operator causing CPU spikes and bad etcd performance
1920551 - Boot Order is not editable for Templates in "openshift" namespace
1920555 - bump cluster-resource-override-admission api dependencies
1920571 - fcp multipath will not recover failed paths automatically
1920619 - Remove default scheduler profile value
1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present
1920674 - MissingKey errors in bindings namespace
1920684 - Text in language preferences modal is misleading
1920695 - CI is broken because of bad image registry reference in the Makefile
1920756 - update generic-admission-server library to get the system:masters authorization optimization
1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set
1920771 - i18n: Delete persistent volume claim drop down is not translated
1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI
1920912 - Unable to power off BMH from console
1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2"
1920984 - [e2e][automation] some menu items names are out dated
1921013 - Gather PersistentVolume definition (if any) used in image registry config
1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)
1921087 - 'start next quick start' link doesn't work and is unintuitive
1921088 - test-cmd is failing on volumes.sh pretty consistently
1921248 - Clarify the kubelet configuration cr description
1921253 - Text filter default placeholder text not internationalized
1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window
1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo
1921277 - Fix Warning and Info log statements to handle arguments
1921281 - oc get -o yaml --export returns "error: unknown flag: --export"
1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist
1921556 - [OCS with Vault]: OCS pods didn't comeup after deploying with Vault details from UI
1921572 - For external source (i.e GitHub Source) form view as well shows yaml
1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass
1921610 - Pipeline metrics font size inconsistency
1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921655 - [OSP] Incorrect error handling during cloudinfo generation
1921713 - [e2e][automation] fix failing VM migration tests
1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view
1921774 - delete application modal errors when a resource cannot be found
1921806 - Explore page APIResourceLinks aren't i18ned
1921823 - CheckBoxControls not internationalized
1921836 - AccessTableRows don't internationalize "User" or "Group"
1921857 - Test flake when hitting router in e2e tests due to one router not being up to date
1921880 - Dynamic plugins are not initialized on console load in production mode
1921911 - Installer PR #4589 is causing leak of IAM role policy bindings
1921921 - "Global Configuration" breadcrumb does not use sentence case
1921949 - Console bug - source code URL broken for gitlab self-hosted repositories
1921954 - Subscription-related constraints in ResolutionFailed events are misleading
1922015 - buttons in modal header are invisible on Safari
1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated
1922050 - [e2e][automation] Improve vm clone tests
1922066 - Cannot create VM from custom template which has extra disk
1922098 - Namespace selection dialog is not closed after select a namespace
1922099 - Updated Readme documentation for QE code review and setup
1922146 - Egress Router CNI doesn't have logging support.
1922267 - Collect specific ADFS error
1922292 - Bump RHCOS boot images for 4.7
1922454 - CRI-O doesn't enable pprof by default
1922473 - reconcile LSO images for 4.8
1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace
1922782 - Source registry missing docker:// in yaml
1922907 - Interop UI Tests - step implementation for updating feature files
1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons
1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD
1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything
1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources
1923102 - [vsphere-problem-detector-operator] pod's version is not correct
1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot
1923674 - k8s 1.20 vendor dependencies
1923721 - PipelineRun running status icon is not rotating
1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios
1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator
1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator
1923874 - Unable to specify values with % in kubeletconfig
1923888 - Fixes error metadata gathering
1923892 - Update arch.md after refactor.
1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator
1923895 - Changelog generation.
1923911 - [e2e][automation] Improve tests for vm details page and list filter
1923945 - PVC Name and Namespace resets when user changes os/flavor/workload
1923951 - EventSources shows `undefined` in project
1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins
1924046 - Localhost: Refreshing on a Project removes it from nav item urls
1924078 - Topology quick search View all results footer should be sticky.
1924081 - NTO should ship the latest Tuned daemon release 2.15
1924084 - backend tests incorrectly hard-code artifacts dir
1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build
1924135 - Under sufficient load, CRI-O may segfault
1924143 - Code Editor Decorator url is broken for Bitbucket repos
1924188 - Language selector dropdown doesn't always pre-select the language
1924365 - Add extra disk for VM which use boot source PXE
1924383 - Degraded network operator during upgrade to 4.7.z
1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box.
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on
1924583 - Deprectaed templates are listed in the Templates screen
1924870 - pick upstream pr#96901: plumb context with request deadline
1924955 - Images from Private external registry not working in deploy Image
1924961 - k8sutil.TrimDNS1123Label creates invalid values
1924985 - Build egress-router-cni for both RHEL 7 and 8
1925020 - Console demo plugin deployment image shoult not point to dockerhub
1925024 - Remove extra validations on kafka source form view net section
1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running
1925072 - NTO needs to ship the current latest stalld v1.7.0
1925163 - Missing info about dev catalog in boot source template column
1925200 - Monitoring Alert icon is missing on the workload in Topology view
1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1
1925319 - bash syntax error in configure-ovs.sh script
1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data
1925516 - Pipeline Metrics Tooltips are overlapping data
1925562 - Add new ArgoCD link from GitOps application environments page
1925596 - Gitops details page image and commit id text overflows past card boundary
1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test
1926588 - The tarball of operator-sdk is not ready for ocp4.7
1927456 - 4.7 still points to 4.6 catalog images
1927500 - API server exits non-zero on 2 SIGTERM signals
1929278 - Monitoring workloads using too high a priorityclass
1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
1929920 - Cluster monitoring documentation link is broken - 404 not found

5. References:

https://access.redhat.com/security/cve/CVE-2018-10103
https://access.redhat.com/security/cve/CVE-2018-10105
https://access.redhat.com/security/cve/CVE-2018-14461
https://access.redhat.com/security/cve/CVE-2018-14462
https://access.redhat.com/security/cve/CVE-2018-14463
https://access.redhat.com/security/cve/CVE-2018-14464
https://access.redhat.com/security/cve/CVE-2018-14465
https://access.redhat.com/security/cve/CVE-2018-14466
https://access.redhat.com/security/cve/CVE-2018-14467
https://access.redhat.com/security/cve/CVE-2018-14468
https://access.redhat.com/security/cve/CVE-2018-14469
https://access.redhat.com/security/cve/CVE-2018-14470
https://access.redhat.com/security/cve/CVE-2018-14553
https://access.redhat.com/security/cve/CVE-2018-14879
https://access.redhat.com/security/cve/CVE-2018-14880
https://access.redhat.com/security/cve/CVE-2018-14881
https://access.redhat.com/security/cve/CVE-2018-14882
https://access.redhat.com/security/cve/CVE-2018-16227
https://access.redhat.com/security/cve/CVE-2018-16228
https://access.redhat.com/security/cve/CVE-2018-16229
https://access.redhat.com/security/cve/CVE-2018-16230
https://access.redhat.com/security/cve/CVE-2018-16300
https://access.redhat.com/security/cve/CVE-2018-16451
https://access.redhat.com/security/cve/CVE-2018-16452
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-3884
https://access.redhat.com/security/cve/CVE-2019-5018
https://access.redhat.com/security/cve/CVE-2019-6977
https://access.redhat.com/security/cve/CVE-2019-6978
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9455
https://access.redhat.com/security/cve/CVE-2019-9458
https://access.redhat.com/security/cve/CVE-2019-11068
https://access.redhat.com/security/cve/CVE-2019-12614
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13225
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15165
https://access.redhat.com/security/cve/CVE-2019-15166
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-15917
https://access.redhat.com/security/cve/CVE-2019-15925
https://access.redhat.com/security/cve/CVE-2019-16167
https://access.redhat.com/security/cve/CVE-2019-16168
https://access.redhat.com/security/cve/CVE-2019-16231
https://access.redhat.com/security/cve/CVE-2019-16233
https://access.redhat.com/security/cve/CVE-2019-16935
https://access.redhat.com/security/cve/CVE-2019-17450
https://access.redhat.com/security/cve/CVE-2019-17546
https://access.redhat.com/security/cve/CVE-2019-18197
https://access.redhat.com/security/cve/CVE-2019-18808
https://access.redhat.com/security/cve/CVE-2019-18809
https://access.redhat.com/security/cve/CVE-2019-19046
https://access.redhat.com/security/cve/CVE-2019-19056
https://access.redhat.com/security/cve/CVE-2019-19062
https://access.redhat.com/security/cve/CVE-2019-19063
https://access.redhat.com/security/cve/CVE-2019-19068
https://access.redhat.com/security/cve/CVE-2019-19072
https://access.redhat.com/security/cve/CVE-2019-19221
https://access.redhat.com/security/cve/CVE-2019-19319
https://access.redhat.com/security/cve/CVE-2019-19332
https://access.redhat.com/security/cve/CVE-2019-19447
https://access.redhat.com/security/cve/CVE-2019-19524
https://access.redhat.com/security/cve/CVE-2019-19533
https://access.redhat.com/security/cve/CVE-2019-19537
https://access.redhat.com/security/cve/CVE-2019-19543
https://access.redhat.com/security/cve/CVE-2019-19602
https://access.redhat.com/security/cve/CVE-2019-19767
https://access.redhat.com/security/cve/CVE-2019-19770
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20054
https://access.redhat.com/security/cve/CVE-2019-20218
https://access.redhat.com/security/cve/CVE-2019-20386
https://access.redhat.com/security/cve/CVE-2019-20387
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20636
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-20812
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2019-20916
https://access.redhat.com/security/cve/CVE-2020-0305
https://access.redhat.com/security/cve/CVE-2020-0444
https://access.redhat.com/security/cve/CVE-2020-1716
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-1751
https://access.redhat.com/security/cve/CVE-2020-1752
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-2574
https://access.redhat.com/security/cve/CVE-2020-2752
https://access.redhat.com/security/cve/CVE-2020-2922
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3898
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-6405
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8492
https://access.redhat.com/security/cve/CVE-2020-8563
https://access.redhat.com/security/cve/CVE-2020-8566
https://access.redhat.com/security/cve/CVE-2020-8619
https://access.redhat.com/security/cve/CVE-2020-8622
https://access.redhat.com/security/cve/CVE-2020-8623
https://access.redhat.com/security/cve/CVE-2020-8624
https://access.redhat.com/security/cve/CVE-2020-8647
https://access.redhat.com/security/cve/CVE-2020-8648
https://access.redhat.com/security/cve/CVE-2020-8649
https://access.redhat.com/security/cve/CVE-2020-9327
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-10029
https://access.redhat.com/security/cve/CVE-2020-10732
https://access.redhat.com/security/cve/CVE-2020-10749
https://access.redhat.com/security/cve/CVE-2020-10751
https://access.redhat.com/security/cve/CVE-2020-10763
https://access.redhat.com/security/cve/CVE-2020-10773
https://access.redhat.com/security/cve/CVE-2020-10774
https://access.redhat.com/security/cve/CVE-2020-10942
https://access.redhat.com/security/cve/CVE-2020-11565
https://access.redhat.com/security/cve/CVE-2020-11668
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-12465
https://access.redhat.com/security/cve/CVE-2020-12655
https://access.redhat.com/security/cve/CVE-2020-12659
https://access.redhat.com/security/cve/CVE-2020-12770
https://access.redhat.com/security/cve/CVE-2020-12826
https://access.redhat.com/security/cve/CVE-2020-13249
https://access.redhat.com/security/cve/CVE-2020-13630
https://access.redhat.com/security/cve/CVE-2020-13631
https://access.redhat.com/security/cve/CVE-2020-13632
https://access.redhat.com/security/cve/CVE-2020-14019
https://access.redhat.com/security/cve/CVE-2020-14040
https://access.redhat.com/security/cve/CVE-2020-14381
https://access.redhat.com/security/cve/CVE-2020-14382
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-14422
https://access.redhat.com/security/cve/CVE-2020-15157
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-15862
https://access.redhat.com/security/cve/CVE-2020-15999
https://access.redhat.com/security/cve/CVE-2020-16166
https://access.redhat.com/security/cve/CVE-2020-24490
https://access.redhat.com/security/cve/CVE-2020-24659
https://access.redhat.com/security/cve/CVE-2020-25211
https://access.redhat.com/security/cve/CVE-2020-25641
https://access.redhat.com/security/cve/CVE-2020-25658
https://access.redhat.com/security/cve/CVE-2020-25661
https://access.redhat.com/security/cve/CVE-2020-25662
https://access.redhat.com/security/cve/CVE-2020-25681
https://access.redhat.com/security/cve/CVE-2020-25682
https://access.redhat.com/security/cve/CVE-2020-25683
https://access.redhat.com/security/cve/CVE-2020-25684
https://access.redhat.com/security/cve/CVE-2020-25685
https://access.redhat.com/security/cve/CVE-2020-25686
https://access.redhat.com/security/cve/CVE-2020-25687
https://access.redhat.com/security/cve/CVE-2020-25694
https://access.redhat.com/security/cve/CVE-2020-25696
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-27813
https://access.redhat.com/security/cve/CVE-2020-27846
https://access.redhat.com/security/cve/CVE-2020-28362
https://access.redhat.com/security/cve/CVE-2020-29652
https://access.redhat.com/security/cve/CVE-2021-2007
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T
lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H
EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8
4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4
mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL
ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy
Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk
4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM
uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG
krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv
RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6
McvuEaxco7U=
=sw8i
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from `oc explain`
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202003-22
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
                                           https://security.gentoo.org/
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

 Severity: Normal
    Title: WebkitGTK+: Multiple vulnerabilities
     Date: March 15, 2020
     Bugs: #699156, #706374, #709612
       ID: 202003-22

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Synopsis
========

Multiple vulnerabilities have been found in WebKitGTK+, the worst of
which may lead to arbitrary code execution.

Background
==========

WebKitGTK+ is a full-featured port of the WebKit rendering engine,
suitable for projects requiring any kind of web integration, from
hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages
=================

    -------------------------------------------------------------------
     Package              /     Vulnerable     /            Unaffected
    -------------------------------------------------------------------
  1  net-libs/webkit-gtk          < 2.26.4                  >= 2.26.4

Description
===========

Multiple vulnerabilities have been discovered in WebKitGTK+. Please
review the referenced CVE identifiers for details.

Workaround
==========

There is no known workaround at this time.
Resolution
==========

All WebkitGTK+ users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"

References
==========

[  1 ] CVE-2019-8625
       https://nvd.nist.gov/vuln/detail/CVE-2019-8625
[  2 ] CVE-2019-8674
       https://nvd.nist.gov/vuln/detail/CVE-2019-8674
[  3 ] CVE-2019-8707
       https://nvd.nist.gov/vuln/detail/CVE-2019-8707
[  4 ] CVE-2019-8710
       https://nvd.nist.gov/vuln/detail/CVE-2019-8710
[  5 ] CVE-2019-8719
       https://nvd.nist.gov/vuln/detail/CVE-2019-8719
[  6 ] CVE-2019-8720
       https://nvd.nist.gov/vuln/detail/CVE-2019-8720
[  7 ] CVE-2019-8726
       https://nvd.nist.gov/vuln/detail/CVE-2019-8726
[  8 ] CVE-2019-8733
       https://nvd.nist.gov/vuln/detail/CVE-2019-8733
[  9 ] CVE-2019-8735
       https://nvd.nist.gov/vuln/detail/CVE-2019-8735
[ 10 ] CVE-2019-8743
       https://nvd.nist.gov/vuln/detail/CVE-2019-8743
[ 11 ] CVE-2019-8763
       https://nvd.nist.gov/vuln/detail/CVE-2019-8763
[ 12 ] CVE-2019-8764
       https://nvd.nist.gov/vuln/detail/CVE-2019-8764
[ 13 ] CVE-2019-8765
       https://nvd.nist.gov/vuln/detail/CVE-2019-8765
[ 14 ] CVE-2019-8766
       https://nvd.nist.gov/vuln/detail/CVE-2019-8766
[ 15 ] CVE-2019-8768
       https://nvd.nist.gov/vuln/detail/CVE-2019-8768
[ 16 ] CVE-2019-8769
       https://nvd.nist.gov/vuln/detail/CVE-2019-8769
[ 17 ] CVE-2019-8771
       https://nvd.nist.gov/vuln/detail/CVE-2019-8771
[ 18 ] CVE-2019-8782
       https://nvd.nist.gov/vuln/detail/CVE-2019-8782
[ 19 ] CVE-2019-8783
       https://nvd.nist.gov/vuln/detail/CVE-2019-8783
[ 20 ] CVE-2019-8808
       https://nvd.nist.gov/vuln/detail/CVE-2019-8808
[ 21 ] CVE-2019-8811
       https://nvd.nist.gov/vuln/detail/CVE-2019-8811
[ 22 ] CVE-2019-8812
       https://nvd.nist.gov/vuln/detail/CVE-2019-8812
[ 23 ] CVE-2019-8813
       https://nvd.nist.gov/vuln/detail/CVE-2019-8813
[ 24 ] CVE-2019-8814
       https://nvd.nist.gov/vuln/detail/CVE-2019-8814
[ 25 ] CVE-2019-8815
       https://nvd.nist.gov/vuln/detail/CVE-2019-8815
[ 26 ] CVE-2019-8816
       https://nvd.nist.gov/vuln/detail/CVE-2019-8816
[ 27 ] CVE-2019-8819
       https://nvd.nist.gov/vuln/detail/CVE-2019-8819
[ 28 ] CVE-2019-8820
       https://nvd.nist.gov/vuln/detail/CVE-2019-8820
[ 29 ] CVE-2019-8821
       https://nvd.nist.gov/vuln/detail/CVE-2019-8821
[ 30 ] CVE-2019-8822
       https://nvd.nist.gov/vuln/detail/CVE-2019-8822
[ 31 ] CVE-2019-8823
       https://nvd.nist.gov/vuln/detail/CVE-2019-8823
[ 32 ] CVE-2019-8835
       https://nvd.nist.gov/vuln/detail/CVE-2019-8835
[ 33 ] CVE-2019-8844
       https://nvd.nist.gov/vuln/detail/CVE-2019-8844
[ 34 ] CVE-2019-8846
       https://nvd.nist.gov/vuln/detail/CVE-2019-8846
[ 35 ] CVE-2020-3862
       https://nvd.nist.gov/vuln/detail/CVE-2020-3862
[ 36 ] CVE-2020-3864
       https://nvd.nist.gov/vuln/detail/CVE-2020-3864
[ 37 ] CVE-2020-3865
       https://nvd.nist.gov/vuln/detail/CVE-2020-3865
[ 38 ] CVE-2020-3867
       https://nvd.nist.gov/vuln/detail/CVE-2020-3867
[ 39 ] CVE-2020-3868
       https://nvd.nist.gov/vuln/detail/CVE-2020-3868

Availability
============

This GLSA and any updates to it are available for viewing at
the Gentoo Security Website:

 https://security.gentoo.org/glsa/202003-22

Concerns?
=========

Security is a primary focus of Gentoo Linux and ensuring the
confidentiality and security of our users' machines is of utmost
importance to us. Any security concerns should be addressed to
security@gentoo.org or alternatively, you may file a bug at
https://bugs.gentoo.org.

License
=======

Copyright 2020 Gentoo Foundation, Inc; referenced text
belongs to its owner(s).

The contents of this document are licensed under the
Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2019-10-07-1 macOS Catalina 10.15\n\nmacOS Catalina 10.15 is now available and addresses the following:\n\nAMD\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8748: Lilang Wu and Moony Li of TrendMicro Mobile Security\nResearch Team\n\napache_mod_php\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: Multiple issues in PHP\nDescription: Multiple issues were addressed by updating to PHP\nversion 7.3.8. \nCVE-2019-11041\nCVE-2019-11042\n\nCoreAudio\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: Processing a maliciously crafted movie may result in the\ndisclosure of process memory\nDescription: A memory corruption issue was addressed with improved\nvalidation. 
\nCVE-2019-8705: riusksk of VulWar Corp working with Trend Micro\u0027s Zero\nDay Initiative\n\nCrash Reporter\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: The \"Share Mac Analytics\" setting may not be disabled when a\nuser deselects the switch to share analytics\nDescription: A race condition existed when reading and writing user\npreferences. \nCVE-2019-8757: William Cerniuk of Core Development, LLC\n\nIntel Graphics Driver\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8758: Lilang Wu and Moony Li of Trend Micro\n\nIOGraphics\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: A malicious application may be able to determine kernel\nmemory layout\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2019-8755: Lilang Wu and Moony Li of Trend Micro\n\nKernel\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2019-8717: Jann Horn of Google Project Zero\n\nKernel\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2019-8781: Linus Henze (pinauten.de)\n\nNotes\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: A local user may be able to view a user\u0027s locked notes\nDescription: The contents of locked notes sometimes appeared in\nsearch results. \nCVE-2019-8730: Jamie Blumberg (@jamie_blumberg) of Virginia\nPolytechnic Institute and State University\n\nPDFKit\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: An attacker may be able to exfiltrate the contents of an\nencrypted PDF\nDescription: An issue existed in the handling of links in encrypted\nPDFs. 
\nCVE-2019-8772: Jens M\u00fcller of Ruhr University Bochum, Fabian Ising\nof FH M\u00fcnster University of Applied Sciences, Vladislav Mladenov\nof Ruhr University Bochum, Christian Mainka of Ruhr University\nBochum, Sebastian Schinzel of FH M\u00fcnster University of Applied\nSciences, and J\u00f6rg Schwenk of Ruhr University Bochum\n\nSharedFileList\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: A malicious application may be able to access recent\ndocuments\nDescription: The issue was addressed with improved permissions logic. \nCVE-2019-8770: Stanislav Zinukhov of Parallels International GmbH\n\nsips\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8701: Simon Huang(@HuangShaomang), Rong Fan(@fanrong1992)\nand pjf of IceSword Lab of Qihoo 360\n\nUIFoundation\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: Processing a maliciously crafted text file may lead to\narbitrary code execution\nDescription: A buffer overflow was addressed with improved bounds\nchecking. 
\nCVE-2019-8745: riusksk of VulWar Corp working with Trend Micro\u0027s\nZero Day Initiative\n\nWebKit\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: Visiting a maliciously crafted website may reveal browsing\nhistory\nDescription: An issue existed in the drawing of web page elements. \nCVE-2019-8769: Pi\u00e9rre Reimertz (@reimertz)\n\nWebKit\nAvailable for: MacBook (Early 2015 and later), MacBook Air (Mid 2012\nand later), MacBook Pro (Mid 2012 and later), Mac mini (Late 2012 and\nlater), iMac (Late 2012 and later), iMac Pro (all models), Mac Pro\n(Late 2013 and later)\nImpact: A user may be unable to delete browsing history items\nDescription: \"Clear History and Website Data\" did not clear the\nhistory. \nCVE-2019-8768: Hugo S. Diaz (coldpointblue)\n\nAdditional recognition\n\nFinder\nWe would like to acknowledge Csaba Fitzl (@theevilbit) for their\nassistance. \n\nGatekeeper\nWe would like to acknowledge Csaba Fitzl (@theevilbit) for their\nassistance. \n\nSafari Data Importing\nWe would like to acknowledge Kent Zoya for their assistance. \n\nSimple certificate enrollment protocol (SCEP)\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nTelephony\nWe would like to acknowledge Phil Stokes from SentinelOne for their\nassistance. 
\n\nInstallation note:\n\nmacOS Catalina 10.15 may be obtained from the Mac App Store or\nApple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl2blu0ACgkQBz4uGe3y\n0M1A7g//e9fSj7PMQLPpztkv54U3jAPgU5jxKEIeSxvImDLg94YFDH1RxiZ8UP+4\nR8tb2vEi+gEV/MWHQyExunrUoxMc0szqFEEyTcA1nxUMTsYtmQNDVeMlv4nc9sOs\nn3Eh1wajdkmnBJoEzQoJfM7W09ND0eFcyr2ucnH7bZXQWkG4ZdJwgtCA0kdlcODK\nY7730ZREKqt88cBKJMow0y2CyeCWK4E1yWD6OTx0Iqf2fZXNinZw/ViDQEOrULy0\nDydi9GF8BmTWNQfiRd9quYN3k0ETe3jMYv7SFwv3LV820OobvY0qlSOAucjkjcNe\nSKhbewe2MRo5EXCRVPYgVMW9elVFtjgSITr7B7a/u6NGUW2jhFj1EeonvOaKDUqu\nKybq7oa3F4EY1hZRs288GzIFdV8osjwggAJ4AithJVEa8fhepS4Q9wIDsEHgkHZa\n/epkzfoXTRNBMC2qf87i1vbLSrN9qxegxHoGn/dVzz/p008m3AfKZmndZ6vRG0ac\njv/lw1lhaKVKyusvix3MU5GVwZvGVqYuqfISp+uaJEBJ4nuUw4LKuzimCAjjCmnw\nCV2Mz9aZG1PX5KrfuYwEc/bw49ODnCW3KiaCD0XlO4MdtEDA9lYoUdmsCbnmMzIa\nrJ3xEcFpjOnJVVXLIWopXzIb23/5YaKctqcRScfmGpoHKRIkzQo=\n=ibLV\n-----END PGP SIGNATURE-----\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8769"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      },
      {
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "154768"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-8769",
        "trust": 3.3
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99404393",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "154768",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.3758",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4087",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4456",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155096",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155068",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-160204",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8769",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160624",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "154768"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "id": "VAR-201912-1854",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160204"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:57:33.069000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "About the security content of Safari 13.0.4",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210792"
      },
      {
        "title": "About the security content of Xcode 11.3",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210796"
      },
      {
        "title": "Mac \u306b\u642d\u8f09\u3055\u308c\u3066\u3044\u308b macOS \u3092\u8abf\u3079\u308b",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT201260"
      },
      {
        "title": "About the security content of iOS 13.3 and iPadOS 13.3",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210785"
      },
      {
        "title": "About the security content of iCloud for Windows 10.9",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210794"
      },
      {
        "title": "About the security content of iOS 12.4.4",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210787"
      },
      {
        "title": "About the security content of iCloud for Windows 7.16 (includes AAS 8.2)",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210795"
      },
      {
        "title": "About the security content of macOS Catalina 10.15.2, Security Update 2019-002 Mojave, Security Update 2019-007 High Sierra",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210788"
      },
      {
        "title": "About the security content of iTunes 12.10.3 for Windows",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210793"
      },
      {
        "title": "About the security content of watchOS 6.1.1",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210789"
      },
      {
        "title": "About the security content of tvOS 13.3",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210790"
      },
      {
        "title": "About the security content of watchOS 5.3.4",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210791"
      },
      {
        "title": "Apple macOS Catalina WebKit Fixes for component security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=99026"
      },
      {
        "title": "Ubuntu Security Notice: webkit2gtk vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4178-1"
      },
      {
        "title": "Debian Security Advisories: DSA-4558-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=a98a6bdc1e45372a45a166582a9eb9bc"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      },
      {
        "title": "Threatpost",
        "trust": 0.1,
        "url": "https://threatpost.com/apple-tackles-a-dozen-bugs-in-catalina/148988/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-200",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210634"
      },
      {
        "trust": 1.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8768"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8719"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8733"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8707"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8763"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8726"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8717"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8757"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8701"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8730"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8770"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8745"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8748"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8772"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8758"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8781"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8755"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8705"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8701"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8745"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8770"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8705"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8748"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8772"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8707"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8755"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8781"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8717"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8757"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8719"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8758"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8726"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8763"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8730"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8768"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8625"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8733"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8769"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99404393/"
      },
      {
        "trust": 0.6,
        "url": "https://www.debian.org/security/2019/dsa-4558"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20193044-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-il/ht210634"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210603"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155096/debian-security-advisory-4558-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159816/red-hat-security-advisory-2020-4451-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/154768/apple-security-advisory-2019-10-07-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4456/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4087/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210634"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-four-vulnerabilities-30768"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156742/gentoo-linux-security-advisory-202003-22.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155068/apple-security-advisory-2019-10-29-11.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166279/red-hat-security-advisory-2022-0056-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.3758/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/160889/red-hat-security-advisory-2021-0050-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.4,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4178-1/"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18609"
      },
      {
        "trust": 0.1,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5605"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11042"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11041"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/downloads/"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "154768"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "154768"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "date": "2020-12-18T19:14:41",
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2020-03-15T14:00:23",
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "date": "2019-10-08T19:59:26",
        "db": "PACKETSTORM",
        "id": "154768"
      },
      {
        "date": "2019-10-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      },
      {
        "date": "2019-12-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      },
      {
        "date": "2019-12-18T18:15:39.897000",
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160204"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8769"
      },
      {
        "date": "2022-03-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      },
      {
        "date": "2020-01-07T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      },
      {
        "date": "2024-11-21T04:50:26.563000",
        "db": "NVD",
        "id": "CVE-2019-8769"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple vulnerabilities fixed in updates to multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-012754"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "information disclosure",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-322"
      }
    ],
    "trust": 0.6
  }
}

VAR-201912-1848

Vulnerability from variot - Updated: 2025-12-22 22:53

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. Apple has released an update for each product. The expected impact depends on each vulnerability, but may include: information leak, user impersonation, arbitrary code execution, UI spoofing, insufficient access restrictions, denial of service (DoS), privilege escalation, memory corruption, and authentication bypass. Apple Safari and the other affected products are developed by Apple. Apple Safari is the web browser included by default with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; Windows-based iCloud prior to 11.0; Windows-based iTunes prior to 12.10.2; Windows-based versions of iCloud prior to 7.15. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601) An out-of-bounds read was addressed with improved input validation.
(CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846) WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (the versions immediately prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902).

Description:

Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. STF then transmits the information to a centralized, receiving Red Hat OpenShift Container Platform (OCP) deployment for storage, retrieval, and monitoring. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
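The compress/gzip issue above (CVE-2022-30631) was a stack exhaustion in Go's Reader.Read triggered by archives containing a very large number of concatenated streams. As a language-agnostic sketch of the mitigation idea — bounding how many layers a decoder will peel before giving up — here is a minimal Python example (Python's gzip module is not the affected code, and the helper names are ours):

```python
import gzip

def make_nested_gzip(payload: bytes, depth: int) -> bytes:
    """Wrap payload in `depth` layers of gzip, mimicking the kind of
    adversarial, deeply layered input the vulnerable reader choked on."""
    data = payload
    for _ in range(depth):
        data = gzip.compress(data)
    return data

def decompress_bounded(data: bytes, max_layers: int = 16) -> bytes:
    """Peel gzip layers iteratively, refusing input layered deeper than
    max_layers instead of recursing (or looping) without limit."""
    for _ in range(max_layers):
        if data[:2] != b"\x1f\x8b":  # not a gzip stream: done unwrapping
            return data
        data = gzip.decompress(data)
    raise ValueError("gzip input nested too deeply")

blob = make_nested_gzip(b"hello", depth=5)
print(decompress_bounded(blob))  # b'hello'
```

The fixed Go release enforces an internal limit along the same lines; the point of the sketch is only that decompression depth must be an explicit, bounded resource.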

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:5633-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:5633
Issue date:        2021-02-24
CVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461
                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464
                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467
                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470
                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880
                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227
                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230
                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452
                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018
                   CVE-2019-6977 CVE-2019-6978 CVE-2019-8625
                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743
                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783
                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815
                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844
                   CVE-2019-8846 CVE-2019-9455 CVE-2019-9458
                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050
                   CVE-2019-13225 CVE-2019-13627 CVE-2019-14889
                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903
                   CVE-2019-15917 CVE-2019-15925 CVE-2019-16167
                   CVE-2019-16168 CVE-2019-16231 CVE-2019-16233
                   CVE-2019-16935 CVE-2019-17450 CVE-2019-17546
                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809
                   CVE-2019-19046 CVE-2019-19056 CVE-2019-19062
                   CVE-2019-19063 CVE-2019-19068 CVE-2019-19072
                   CVE-2019-19221 CVE-2019-19319 CVE-2019-19332
                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533
                   CVE-2019-19537 CVE-2019-19543 CVE-2019-19602
                   CVE-2019-19767 CVE-2019-19770 CVE-2019-19906
                   CVE-2019-19956 CVE-2019-20054 CVE-2019-20218
                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388
                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807
                   CVE-2019-20812 CVE-2019-20907 CVE-2019-20916
                   CVE-2020-0305 CVE-2020-0444 CVE-2020-1716
                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752
                   CVE-2020-1971 CVE-2020-2574 CVE-2020-2752
                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864
                   CVE-2020-3865
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
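Because these digests pin the exact release payload per architecture, a consumer can sanity-check a reported digest before trusting an image. A minimal Python sketch (the function and dict names are ours; the digest values are copied from this advisory):

```python
import re

# sha256 image digests published in this advisory, keyed by architecture
PINNED_DIGESTS = {
    "x86_64": "sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70",
    "s390x": "sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d",
    "ppc64le": "sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6",
}

# OCI-style digest: algorithm prefix plus 64 lowercase hex characters
DIGEST_RE = re.compile(r"sha256:[0-9a-f]{64}")

def verify_digest(arch: str, reported: str) -> bool:
    """Return True only if `reported` is a well-formed sha256 digest that
    exactly matches the digest pinned for `arch` in the advisory."""
    pinned = PINNED_DIGESTS.get(arch)
    return (pinned is not None
            and DIGEST_RE.fullmatch(reported) is not None
            and reported == pinned)
```

A mismatch (or an unknown architecture) simply yields False, so the caller can refuse to proceed rather than pull an unexpected image.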

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
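Advisories like this scatter dozens of CVE identifiers through headers, fix lists, and bug titles; for cross-referencing against the CVE pages mentioned above, they can be collected mechanically. A small Python sketch, assuming only the standard CVE-YYYY-NNNN naming scheme (the function name is ours):

```python
import re

# CVE IDs: "CVE-", a 4-digit year, then a sequence number of 4-7 digits
CVE_RE = re.compile(r"\bCVE-\d{4}-\d{4,7}\b")

def extract_cves(text: str) -> list[str]:
    """Return the unique CVE identifiers in `text`, preserving first-seen order."""
    return list(dict.fromkeys(CVE_RE.findall(text)))

fixes = """
* crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)
* golang: crypto/ssh: crafted authentication request can lead to nil pointer
  dereference (CVE-2020-29652)
* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
  (CVE-2021-3121)
"""
print(extract_cves(fixes))
# ['CVE-2020-27846', 'CVE-2020-29652', 'CVE-2021-3121']
```

The `dict.fromkeys` idiom deduplicates while keeping the order in which IDs first appear, which is convenient when comparing against an advisory's own "CVE Names" header.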

  1. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

  1. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state 1752220 - [OVN] Network Policy fails to work when project label gets overwritten 1756096 - Local storage operator should implement must-gather spec 1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 1768255 - installer reports 100% complete but failing components 1770017 - Init containers restart when the exited container is removed from node. 1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating 1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset 1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale 1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands 1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions 1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved" 1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor 1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image 1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration 1806000 - CRI-O failing with: error reserving ctr name 1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1810438 - Installation logs are not gathered from OCP nodes 1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist 1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation 1813012 - EtcdDiscoveryDomain no longer needed 1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints 1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use 1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist 1819457 - Package Server is in 'Cannot update' status despite properly working 1820141 - [RFE] deploy qemu-quest-agent on the nodes 1822744 - OCS Installation CI test flaking 1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario 1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool 1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file 1829723 - User workload monitoring alerts fire out of the box 1832968 - oc adm catalog mirror does not mirror the index image itself 1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN 1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters 1834995 - olmFull suite always fails once th suite is run on the same 
cluster 1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz 1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4 1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks 1838751 - [oVirt][Tracker] Re-enable skipped network tests 1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups 1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed 1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP 1841119 - Get rid of config patches and pass flags directly to kcm 1841175 - When an Install Plan gets deleted, OLM does not create a new one 1841381 - Issue with memoryMB validation 1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option 1844727 - Etcd container leaves grep and lsof zombie processes 1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs 1847074 - Filter bar layout issues at some screen widths on search page 1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural 1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5 1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service 1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard 1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing 1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD 1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service 1853115 - the restriction of --cloud option should be shown in help text. 
1853116 - --to option does not work with --credentials-requests flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importing vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the create api help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig that has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading from 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer fails after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - oc api-resources does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and ‘URL’
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in oc explain
1880787 - No description for Provisioning CRD for oc explain
1880902 - need dnsPolicy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BareMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested minimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Maintenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visually impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accessibility violations
1885706 - Cypress: Fix 'link-name' accessibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad rootDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgrade from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take effect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accessibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure environment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created from the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels needs to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Dont prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses depricated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp lose sync openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: make check broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display.
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by it's name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove &apos; from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - oc adm release mirror panic panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service accout issuer
1902091 - cluster-image-registry-operator pod leaves connections open when fails connecting S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesnt set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format:
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE - Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for oc image append does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate this without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - oc image append can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized Autoscaled & Autoscaling pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - 'Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - 'Active alerts' section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The overrides of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384 custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing ignore-volume-az is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show kubectl diff command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the oc rollback -h help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Cretifiaction field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle'
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditons
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not aditable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in "from dockerfile" flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk souce
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not create during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation]fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
1915674 - Golden image PVC creation - storage size should be taken from the template 1915685 - Message for not supported template is not clear enough 1915760 - Need to increase timeout to wait rhel worker get ready 1915793 - quick starts panel syncs incorrectly across browser windows 1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster 1915818 - vsphere-problem-detector: use "_totals" in metrics 1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol 1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version 1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0 1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics 1915885 - Kuryr doesn't support workers running on multiple subnets 1915898 - TaskRun log output shows "undefined" in streaming 1915907 - test/cmd/builds.sh uses docker.io 1915912 - sig-storage-csi-snapshotter image not available 1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard 1915939 - Resizing the browser window removes Web Terminal Icon 1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] 1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7 1915962 - ROKS: manifest with machine health check fails to apply in 4.7 1915972 - Global configuration breadcrumbs do not work as expected 1915981 - Install ethtool and conntrack in container for debugging 1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception 1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups 1916021 - OLM enters infinite loop if Pending CSV replaces itself 1916056 - Need Visual Web 
Terminal metric enabled for OCP monitoring telemetry 1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations 1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk 1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration 1916145 - Explicitly set minimum versions of python libraries 1916164 - Update csi-driver-nfs builder & base images to be consistent with ART 1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7 1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third 1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2 1916379 - error metrics from vsphere-problem-detector should be gauge 1916382 - Can't create ext4 filesystems with Ignition 1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates 1916401 - Deleting an ingress controller with a bad DNS Record hangs 1916417 - [Kuryr] Must-gather does not have all Custom Resources information 1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image 1916454 - teach CCO about upgradeability from 4.6 to 4.7 1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation 1916502 - Boot disk mirroring fails with mdadm error 1916524 - Two rootdisk shows on storage step 1916580 - Default yaml is broken for VM and VM template 1916621 - oc adm node-logs examples are wrong 1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
1916692 - Possibly fails to destroy LB and thus cluster 1916711 - Update Kube dependencies in MCO to 1.20.0 1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6 1916764 - editing a workload with no application applied, will auto fill the app 1916834 - Pipeline Metrics - Text Updates 1916843 - collect logs from openshift-sdn-controller pod 1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed 1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually 1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited 1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together" 1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace 1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document 1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error 1917117 - Common templates - disks screen: invalid disk name 1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created 1917146 - [oVirt] Consume 23-10 ovirt sdk - csi operator 1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk 1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened 1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console 1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory 1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7 1917327 - annotations.message maybe wrong for NTOPodsNotReady alert 1917367 - Refactor periodic.go 1917371 - Add docs on how to use the built-in profiler 1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console 1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui 1917484 - [BM][IPI] Failed to scale down machineset 1917522 - Deprecate --filter-by-os in oc adm catalog mirror 1917537 - controllers continuously busy reconciling operator 1917551 - use min_over_time for vsphere prometheus alerts 1917585 - OLM Operator install page missing i18n 1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types 1917605 - Deleting an exgw causes pods to no longer route to other exgws 1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API 1917656 - Add to Project/application for eventSources from topology shows 404 1917658 - Show TP badge for sources powered by camel connectors in create flow 1917660 - Editing parallelism of job get error info 1917678 - Could not provision pv when no symlink and target found on rhel worker 1917679 - Hide double CTA in admin pipelineruns tab 1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exist to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather a list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP 1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026` 1918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connector's are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration (SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - 
OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese translate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information for `oc new-project --skip-config-write` is wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode` 1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective 
and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't come up after deploying with Vault details from UI 1921572 - For external source (i.e. GitHub Source) form view as well shows yaml 1921580 - [e2e][automation] Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation]
Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 
1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources shows `undefined` in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box.
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprectaed templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image shoult not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809
https://access.redhat.com/security/cve/CVE-2019-19046
https://access.redhat.com/security/cve/CVE-2019-19056
https://access.redhat.com/security/cve/CVE-2019-19062
https://access.redhat.com/security/cve/CVE-2019-19063
https://access.redhat.com/security/cve/CVE-2019-19068
https://access.redhat.com/security/cve/CVE-2019-19072
https://access.redhat.com/security/cve/CVE-2019-19221
https://access.redhat.com/security/cve/CVE-2019-19319
https://access.redhat.com/security/cve/CVE-2019-19332
https://access.redhat.com/security/cve/CVE-2019-19447
https://access.redhat.com/security/cve/CVE-2019-19524
https://access.redhat.com/security/cve/CVE-2019-19533
https://access.redhat.com/security/cve/CVE-2019-19537
https://access.redhat.com/security/cve/CVE-2019-19543
https://access.redhat.com/security/cve/CVE-2019-19602
https://access.redhat.com/security/cve/CVE-2019-19767
https://access.redhat.com/security/cve/CVE-2019-19770
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20054
https://access.redhat.com/security/cve/CVE-2019-20218
https://access.redhat.com/security/cve/CVE-2019-20386
https://access.redhat.com/security/cve/CVE-2019-20387
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20636
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-20812
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2019-20916
https://access.redhat.com/security/cve/CVE-2020-0305
https://access.redhat.com/security/cve/CVE-2020-0444
https://access.redhat.com/security/cve/CVE-2020-1716
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-1751
https://access.redhat.com/security/cve/CVE-2020-1752
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-2574
https://access.redhat.com/security/cve/CVE-2020-2752
https://access.redhat.com/security/cve/CVE-2020-2922
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3898
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-6405
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8492
https://access.redhat.com/security/cve/CVE-2020-8563
https://access.redhat.com/security/cve/CVE-2020-8566
https://access.redhat.com/security/cve/CVE-2020-8619
https://access.redhat.com/security/cve/CVE-2020-8622
https://access.redhat.com/security/cve/CVE-2020-8623
https://access.redhat.com/security/cve/CVE-2020-8624
https://access.redhat.com/security/cve/CVE-2020-8647
https://access.redhat.com/security/cve/CVE-2020-8648
https://access.redhat.com/security/cve/CVE-2020-8649
https://access.redhat.com/security/cve/CVE-2020-9327
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-10029
https://access.redhat.com/security/cve/CVE-2020-10732
https://access.redhat.com/security/cve/CVE-2020-10749
https://access.redhat.com/security/cve/CVE-2020-10751
https://access.redhat.com/security/cve/CVE-2020-10763
https://access.redhat.com/security/cve/CVE-2020-10773
https://access.redhat.com/security/cve/CVE-2020-10774
https://access.redhat.com/security/cve/CVE-2020-10942
https://access.redhat.com/security/cve/CVE-2020-11565
https://access.redhat.com/security/cve/CVE-2020-11668
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-12465
https://access.redhat.com/security/cve/CVE-2020-12655
https://access.redhat.com/security/cve/CVE-2020-12659
https://access.redhat.com/security/cve/CVE-2020-12770
https://access.redhat.com/security/cve/CVE-2020-12826
https://access.redhat.com/security/cve/CVE-2020-13249
https://access.redhat.com/security/cve/CVE-2020-13630
https://access.redhat.com/security/cve/CVE-2020-13631
https://access.redhat.com/security/cve/CVE-2020-13632
https://access.redhat.com/security/cve/CVE-2020-14019
https://access.redhat.com/security/cve/CVE-2020-14040
https://access.redhat.com/security/cve/CVE-2020-14381
https://access.redhat.com/security/cve/CVE-2020-14382
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-14422
https://access.redhat.com/security/cve/CVE-2020-15157
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-15862
https://access.redhat.com/security/cve/CVE-2020-15999
https://access.redhat.com/security/cve/CVE-2020-16166
https://access.redhat.com/security/cve/CVE-2020-24490
https://access.redhat.com/security/cve/CVE-2020-24659
https://access.redhat.com/security/cve/CVE-2020-25211
https://access.redhat.com/security/cve/CVE-2020-25641
https://access.redhat.com/security/cve/CVE-2020-25658
https://access.redhat.com/security/cve/CVE-2020-25661
https://access.redhat.com/security/cve/CVE-2020-25662
https://access.redhat.com/security/cve/CVE-2020-25681
https://access.redhat.com/security/cve/CVE-2020-25682
https://access.redhat.com/security/cve/CVE-2020-25683
https://access.redhat.com/security/cve/CVE-2020-25684
https://access.redhat.com/security/cve/CVE-2020-25685
https://access.redhat.com/security/cve/CVE-2020-25686
https://access.redhat.com/security/cve/CVE-2020-25687
https://access.redhat.com/security/cve/CVE-2020-25694
https://access.redhat.com/security/cve/CVE-2020-25696
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-27813
https://access.redhat.com/security/cve/CVE-2020-27846
https://access.redhat.com/security/cve/CVE-2020-28362
https://access.redhat.com/security/cve/CVE-2020-29652
https://access.redhat.com/security/cve/CVE-2021-2007
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .

APPLE-SA-2019-10-29-3 tvOS 13.2

tvOS 13.2 is now available and addresses the following:

Accounts
Available for: Apple TV 4K and Apple TV HD
Impact: A remote attacker may be able to leak memory
Description: An out-of-bounds read was addressed with improved input validation.
CVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at Technische Universität Darmstadt
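Several entries in this advisory describe out-of-bounds reads fixed "with improved input validation". Apple's actual code is not public, so as a purely illustrative sketch, the hypothetical C parser below shows the general pattern: an attacker-controlled length field is trusted and used to read past the end of a buffer, and the fix validates the length against the actual buffer size before copying.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical parser for a message laid out as [1-byte length][payload]. */

/* Vulnerable pattern: trusts the attacker-supplied length byte, so a
 * length larger than pkt_size - 1 reads past the end of pkt. */
size_t copy_field_unsafe(const unsigned char *pkt, size_t pkt_size,
                         unsigned char *out) {
    (void)pkt_size;                 /* size is available but never checked */
    size_t len = pkt[0];
    memcpy(out, pkt + 1, len);      /* out-of-bounds read if len is too big */
    return len;
}

/* Fixed pattern ("improved input validation"): reject lengths that
 * exceed what the packet actually contains. */
int copy_field_safe(const unsigned char *pkt, size_t pkt_size,
                    unsigned char *out, size_t *out_len) {
    if (pkt_size < 1)
        return -1;                  /* no room for the length byte */
    size_t len = pkt[0];
    if (len > pkt_size - 1)
        return -1;                  /* declared length must fit in the packet */
    memcpy(out, pkt + 1, len);
    *out_len = len;
    return 0;
}
```

The function names and message layout here are assumptions for illustration only; the point is that the fixed variant makes the buffer size an explicit precondition rather than trusting data from the wire.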

App Store
Available for: Apple TV 4K and Apple TV HD
Impact: A local attacker may be able to log in to the account of a previously logged-in user without valid credentials.
CVE-2019-8803: Kiyeon An, 차민규 (CHA Minkyu)

Audio
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8785: Ian Beer of Google Project Zero
CVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure

AVEVideoEncoder
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure

File System Events
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8798: ABC Research s.r.o. working with Trend Micro's Zero Day Initiative
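The entries above that say "a memory corruption issue was addressed with improved memory handling" typically cover bug classes such as use-after-free and double-free. As a minimal, hypothetical C sketch of the class (again, not Apple's actual code): the unsafe variant frees a buffer but leaves a dangling pointer behind, while the hardened variant clears ownership state at the point of free so stale uses fail fast and a repeated teardown is harmless.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical object owning a heap buffer. */
typedef struct {
    char *buf;
    size_t len;
} frame;

/* Risky pattern: frees the buffer but leaves a dangling pointer that
 * later code may dereference (use-after-free) or free again (double-free). */
void frame_reset_unsafe(frame *f) {
    free(f->buf);                 /* f->buf now dangles */
}

/* "Improved memory handling": clear the pointer and length at the point
 * of free, so a stale access hits NULL and a second reset is a no-op. */
void frame_reset_safe(frame *f) {
    free(f->buf);
    f->buf = NULL;
    f->len = 0;
}
```

The `frame` type and function names are illustrative assumptions; the design point is that ownership state is updated atomically with the deallocation instead of being left stale.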

Kernel
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to read restricted memory
Description: A validation issue was addressed with improved input sanitization.
CVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure

WebKit
Available for: Apple TV 4K and Apple TV HD
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: Multiple memory corruption issues were addressed with improved memory handling.
CVE-2019-8782: Cheolung Lee of LINE+ Security Team
CVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team
CVE-2019-8808: found by OSS-Fuzz
CVE-2019-8811: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8812: an anonymous researcher
CVE-2019-8814: Cheolung Lee of LINE+ Security Team
CVE-2019-8816: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8819: Cheolung Lee of LINE+ Security Team
CVE-2019-8820: Samuel Groß of Google Project Zero
CVE-2019-8821: Sergei Glazunov of Google Project Zero
CVE-2019-8822: Sergei Glazunov of Google Project Zero
CVE-2019-8823: Sergei Glazunov of Google Project Zero

WebKit Process Model
Available for: Apple TV 4K and Apple TV HD
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: Multiple memory corruption issues were addressed with improved memory handling.
CVE-2019-8815: Apple

Additional recognition

CFNetwork We would like to acknowledge Lily Chen of Google for their assistance.

Kernel We would like to acknowledge Jann Horn of Google Project Zero for their assistance.

WebKit We would like to acknowledge Dlive of Tencent's Xuanwu Lab and Zhiyi Zhang of Codesafe Team of Legendsec at Qi'anxin Group for their assistance.

Installation note:

Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software."

To check the current version of software, select "Settings -> General -> About."

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

.

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from oc explain
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.

Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 2007379 - Events are not generated for master offset for ordinary clock 2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace 2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address 2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error 2007522 - No new local-storage-operator-metadata-container is build for 4.10 2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10 2007580 - Azure cilium installs are failing e2e tests 2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10 2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes 2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures 2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow 2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates 2007802 - AWS machine actuator get stuck if machine is completely missing 2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator 2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process 2008151 - Topology breaks on clicking in empty state 2008185 - Console operator go.mod should use go 1.16.version 2008201 - openstack-az job is failing on haproxy idle test 2008207 - vsphere CSI driver doesn't set resource limits 2008223 - gather_audit_logs: fix oc command line to get the current audit profile 2008235 - The Save button in the Edit DC form remains disabled 2008256 - Update Internationalization README with scope info 2008321 - Add correct documentation link for MON_DISK_LOW 2008462 - Disable PodSecurity feature gate for 4.10 2008490 - 
Backing store details page does not contain all the kebab actions. 2010181 - Environment variables not getting reset on reload on deployment edit form 2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2010341 - OpenShift Alerting Rules Style-Guide Compliance 2010342 - Local console builds can have out of memory errors 2010345 - OpenShift Alerting Rules Style-Guide Compliance 2010348 - Reverts PIE build mode for K8S components 2010352 - OpenShift Alerting Rules Style-Guide Compliance 2010354 - OpenShift Alerting Rules Style-Guide Compliance 2010359 - OpenShift Alerting Rules Style-Guide Compliance 2010368 - OpenShift Alerting Rules Style-Guide Compliance 2010376 - OpenShift Alerting Rules Style-Guide Compliance 2010662 - Cluster is unhealthy after image-registry-operator tests 2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent) 2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API 2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address 2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing 2010864 - Failure building EFS operator 2010910 - ptp worker events unable to identify interface for multiple interfaces 2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24 2010921 - Azure Stack Hub does not handle additionalTrustBundle 2010931 - SRO CSV uses non default category "Drivers and plugins" 2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
2011038 - optional operator conditions are confusing 2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass 2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's 2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image 2011368 - Tooltip in pipeline visualization shows misleading data 2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels 2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards 2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster 2011513 - Kubelet rejects pods that use resources that should be freed by completed pods 2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine" 2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented 2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore 2011733 - Repository README points to broken documentarion link 2011753 - Ironic resumes clean before raid configuration job is actually completed 2011809 - The nodes page in the openshift console doesn't work. 
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing 
in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24.
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.

2. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

3. Description:

WebKitGTK+ is a port of the portable WebKit web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
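As a quick supplement to the advisory's instructions, the sketch below shows one way to check whether an installed webkitgtk4 build is at least the fixed version from the package list (2.28.2-2.el7). The `INSTALLED` value is a placeholder that would normally come from `rpm -q`, and `sort -V` is used only as a rough stand-in for rpm's full version-comparison algorithm.

```shell
# Hedged sketch: compare an installed version-release against the fixed one.
# In practice INSTALLED would come from:
#   rpm -q --qf '%{VERSION}-%{RELEASE}\n' webkitgtk4
# Note: sort -V approximates, but does not fully replicate, rpm version ordering.
INSTALLED="2.28.2-2.el7"   # placeholder value for illustration
FIXED="2.28.2-2.el7"

# Whichever string sorts first is the older version.
oldest=$(printf '%s\n%s\n' "$INSTALLED" "$FIXED" | sort -V | head -n1)

if [ "$oldest" = "$FIXED" ]; then
    echo "webkitgtk4 is at or above the fixed version"
else
    echo "webkitgtk4 predates the fix - apply the update (e.g. yum update webkitgtk4)"
fi
```

With an older build such as 2.26.0-1.el7 in `INSTALLED`, the same comparison reports that the package predates the fix.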

6. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64:
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

ppc64:
webkitgtk4-2.28.2-2.el7.ppc.rpm
webkitgtk4-2.28.2-2.el7.ppc64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm
webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm
webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le:
webkitgtk4-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x:
webkitgtk4-2.28.2-2.el7.s390.rpm
webkitgtk4-2.28.2-2.el7.s390x.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm
webkitgtk4-jsc-2.28.2-2.el7.s390.rpm
webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64:
webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm
webkitgtk4-devel-2.28.2-2.el7.ppc.rpm
webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x:
webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm
webkitgtk4-devel-2.28.2-2.el7.s390.rpm
webkitgtk4-devel-2.28.2-2.el7.s390x.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 8) - aarch64, ppc64le, s390x, x86_64

5. Bugs fixed (https://bugzilla.redhat.com/):

1207179 - Select items matching non existing pattern does not unselect already selected
1566027 - can't correctly compute contents size if hidden files are included
1569868 - Browsing samba shares using gvfs is very slow
1652178 - [RFE] perf-tool run on wayland
1656262 - The terminal's character display is unclear on rhel8 guest after installing gnome
1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled
1692536 - login screen shows after gnome-initial-setup
1706008 - Sound Effect sometimes fails to change to selected option.
1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead.
1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined
1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly
1758891 - tracker-devel subpackage missing from el8 repos
1775345 - Rebase xdg-desktop-portal to 1.6
1778579 - Nautilus does not respect umask settings.
1779691 - Rebase xdg-desktop-portal-gtk to 1.6
1794045 - There are two different high contrast versions of desktop icons
1804719 - Update vte291 to 0.52.4
1805929 - RHEL 8.1 gnome-shell-extension errors
1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp
1814820 - No checkbox to install updates in the shutdown dialog
1816070 - "search for an application to open this file" dialog broken
1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution
1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution
1817143 - Rebase WebKitGTK to 2.28
1820759 - Include IO stall fixes
1820760 - Include IO fixes
1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening
1827030 - gnome-settings-daemon: subscription notification on CentOS Stream
1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content
1832347 - [Rebase] Rebase pipewire to 0.3.x
1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install
1837381 - Backport screen cast improvements to 8.3
1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version
1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6
1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113
1840080 - Can not control top bar menus via keys in Wayland
1840788 - [flatpak][rhel8] unable to build potrace as dependency
1843486 - Software crash after clicking Updates tab
1844578 - anaconda very rarely crashes at startup with a pygobject traceback
1846191 - usb adapters hotplug crashes gnome-shell
1847051 - JS ERROR: TypeError: area is null
1847061 - File search doesn't work under certain locales
1847062 - gnome-remote-desktop crash on QXL graphics
1847203 - gnome-shell: get_top_visible_window_actor(): gnome-shell killed by SIGSEGV
1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow
1854734 - PipeWire 0.2 should be required by xdg-desktop-portal
1866332 - Remove obsolete libusb-devel dependency
1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at "Started GNOME Display Manager" - GDM regression issue


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1848",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.3"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.2"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.15"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.8"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 11.0 earlier"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 7.15 earlier"
      },
      {
        "model": "ios",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "12.10.2 for windows earlier"
      },
      {
        "model": "macos catalina",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.15.1 earlier"
      },
      {
        "model": "macos high sierra",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.13.6 (security update 2019-006 not applied )"
      },
      {
        "model": "macos mojave",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.14.6 (security update 2019-001 not applied )"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.3 earlier"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "6.1 earlier"
      },
      {
        "model": "xcode",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "11.2 earlier"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_catalina",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_high_sierra",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_mojave",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:watchos",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:xcode",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2019-8819",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-8819",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-160254",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2019-8819",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-8819",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201910-1757",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-160254",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-8819",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0, iCloud for Windows 7.15. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Has released an update for each product.The expected impact depends on each vulnerability, but can be affected as follows: * * information leak * * User impersonation * * Arbitrary code execution * * UI Spoofing * * Insufficient access restrictions * * Service operation interruption (DoS) * * Privilege escalation * * Memory corruption * * Authentication bypass. Apple Safari, etc. are all products of Apple (Apple). Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; Windows-based iCloud prior to 11.0; Windows-based iTunes prior to 12.10.2; Windows-based versions of iCloud prior to 7.15. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. 
(CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. 
STF then transmits the information to a centralized, receiving Red Hat OpenShift Container Platform (OCP) deployment for storage, retrieval, and monitoring.

Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally.

Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

5. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:5633-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:5633
Issue date:        2021-02-24
CVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462
                   CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466
                   CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470
                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881
                   CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229
                   CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452
                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977
                   CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720
                   CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808
                   CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814
                   CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846
                   CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614
                   CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889
                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917
                   CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231
                   CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546
                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046
                   CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068
                   CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332
                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537
                   CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770
                   CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218
                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454
                   CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907
                   CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716
                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971
                   CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868
                   CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897
                   CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901
                   CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774
                   CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566
                   CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624
                   CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327
                   CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806
                   CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915
                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732
                   CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773
                   CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668
                   CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659
                   CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630
                   CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040
                   CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422
                   CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999
                   CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211
                   CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662
                   CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684
                   CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694
                   CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846
                   CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2.
Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.
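The release images above are pinned by sha256 content digest rather than by mutable tag. As a minimal illustration of what such a digest string is (the payload here is hypothetical, not the actual release image — the real digest is resolved by `oc adm release info`), a short Go sketch:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// imageDigest returns the registry-style digest string for a blob of
// bytes: the hex-encoded SHA-256 sum, prefixed with the algorithm name.
func imageDigest(payload []byte) string {
	sum := sha256.Sum256(payload)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	// Hypothetical payload standing in for image manifest bytes.
	fmt.Println(imageDigest([]byte("example release payload")))
}
```

Pulling by digest (`ocp-release@sha256:...`) guarantees the exact bytes that were signed and tested, whereas a tag like `4.7.0-x86_64` could in principle be re-pointed later.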
Security Fix(es):

* crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)
* golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)
* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
* kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)
* containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)
* heketi: gluster-block volume password details available in logs (CVE-2020-10763)
* golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)
* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
* golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)
* golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel ease-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

4.
Bugs fixed (https://bugzilla.redhat.com/):\n\n1620608 - Restoring deployment config with history leads to weird state\n1752220 - [OVN] Network Policy fails to work when project label gets overwritten\n1756096 - Local storage operator should implement must-gather spec\n1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n1768255 - installer reports 100% complete but failing components\n1770017 - Init containers restart when the exited container is removed from node. \n1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating\n1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset\n1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale\n1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands\n1784298 - \"Displaying with reduced resolution due to large dataset.\" would show under some conditions\n1785399 - Under condition of heavy pod creation, creation fails with \u0027error reserving pod name ...: name is reserved\"\n1797766 - Resource Requirements\" specDescriptor fields - CPU and Memory injects empty string YAML editor\n1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - 
olmFull suite always fails once th suite is run on the same cluster\n1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz\n1837953 - Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks\n1838751 - [oVirt][Tracker] Re-enable skipped network tests\n1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups\n1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed\n1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP\n1841119 - Get rid of config patches and pass flags directly to kcm\n1841175 - When an Install Plan gets deleted, OLM does not create a new one\n1841381 - Issue with memoryMB validation\n1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option\n1844727 - Etcd container leaves grep and lsof zombie processes\n1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs\n1847074 - Filter bar layout issues at some screen widths on search page\n1848358 - CRDs with preserveUnknownFields:true don\u0027t reflect in status that they are non-structural\n1849543 - [4.5]kubeletconfig\u0027s description will show multiple lines for finalizers when upgrade from 4.4.8-\u003e4.5\n1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service\n1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard\n1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing\n1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD\n1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service\n1853115 - the restriction 
of --cloud option should be shown in help text. \n1853116 - `--to` option does not work with `--credentials-requests` flag. \n1853352 - [v2v][UI] Storage Class fields Should  Not be empty  in VM  disks view\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854567 - \"Installed Operators\" list showing \"duplicated\" entries during installation\n1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present\n1855351 - Inconsistent Installer reactions to Ctrl-C during user input process\n1855408 - OVN cluster unstable after running minimal scale test\n1856351 - Build page should show metrics for when the build ran, not the last 30 minutes\n1856354 - New APIServices missing from OpenAPI definitions\n1857446 - ARO/Azure: excessive pod memory allocation causes node lockup\n1857877 - Operator upgrades can delete existing CSV before completion\n1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed\n1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created\n1860136 - default ingress does not propagate annotations to route object on update\n1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as \"Failed\"\n1860518 - unable to stop a crio pod\n1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller\n1862430 - LSO: PV creation lock should not be acquired in a loop\n1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
\n1862608 - Virtual media does not work on hosts using BIOS, only UEFI\n1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network\n1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff\n1865839 - rpm-ostree fails with \"System transaction in progress\" when moving to kernel-rt\n1866043 - Configurable table column headers can be illegible\n1866087 - Examining agones helm chart resources results in \"Oh no!\"\n1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info\n1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement\n1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity\n1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there\u2019s no indication on which labels offer tooltip/help\n1866340 - [RHOCS Usability Study][Dashboard] It was not clear why \u201cNo persistent storage alerts\u201d was prominently displayed\n1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations\n1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le \u0026 s390x\n1866482 - Few errors are seen when oc adm must-gather is run\n1866605 - No metadata.generation set for build and buildconfig objects\n1866873 - MCDDrainError \"Drain failed on  , updates may be blocked\" missing rendered node name\n1866901 - Deployment strategy for BMO allows multiple pods to run at the same time\n1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
\n1867165 - Cannot assign static address to baremetal install bootstrap vm\n1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig\n1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS\n1867477 - HPA monitoring cpu utilization fails for deployments which have init containers\n1867518 - [oc] oc should not print so many goroutines when ANY command fails\n1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on  250 node cluster\n1867965 - OpenShift Console Deployment Edit overwrites deployment yaml\n1868004 - opm index add appears to produce image with wrong registry server binary\n1868065 - oc -o jsonpath prints possible warning / bug \"Unable to decode server response into a Table\"\n1868104 - Baremetal actuator should not delete Machine objects\n1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead\n1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters\n1868527 - OpenShift Storage using VMWare vSAN receives error \"Failed to add disk \u0027scsi0:2\u0027\" when mounted pod is created on separate node\n1868645 - After a disaster recovery pods a stuck in \"NodeAffinity\" state and not running\n1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation\n1868765 - [vsphere][ci] could not reserve an IP address: no available addresses\n1868770 - catalogSource named \"redhat-operators\" deleted in a disconnected cluster\n1868976 - Prometheus error opening query log file on EBS backed PVC\n1869293 - The configmap name looks confusing in aide-ds pod logs\n1869606 - crio\u0027s failing to delete a network namespace\n1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes\n1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  
[Conformance]\n1870373 - Ingress Operator reports available when DNS fails to provision\n1870467 - D/DC Part of Helm / Operator Backed should not have HPA\n1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json\n1870800 - [4.6] Managed Column not appearing on Pods Details page\n1871170 - e2e tests are needed to validate the functionality of the etcdctl container\n1872001 - EtcdDiscoveryDomain no longer needed\n1872095 - content are expanded to the whole line when only one column in table on Resource Details page\n1872124 - Could not choose device type as \"disk\" or \"part\" when create localvolumeset from web console\n1872128 - Can\u0027t run container with hostPort on ipv6 cluster\n1872166 - \u0027Silences\u0027 link redirects to unexpected \u0027Alerts\u0027 view after creating a silence in the Developer perspective\n1872251 - [aws-ebs-csi-driver] Verify job in CI doesn\u0027t check for vendor dir sanity\n1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them\n1872821 - [DOC] Typo in Ansible Operator Tutorial\n1872907 - Fail to create CR from generated Helm Base Operator\n1872923 - Click \"Cancel\" button on the \"initialization-resource\" creation form page should send users to the \"Operator details\" page instead of \"Install Operator\" page (previous page)\n1873007 - [downstream] failed to read config when running the operator-sdk in the home path\n1873030 - Subscriptions without any candidate operators should cause resolution to fail\n1873043 - Bump to latest available 1.19.x k8s\n1873114 - Nodes goes into NotReady state (VMware)\n1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem\n1873305 - Failed to power on /inspect node when using Redfish protocol\n1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information\n1873480 - 
Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: \u201c?\u201d button/icon in Developer Console -\u003eNavigation\n1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working\n1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name \u003e 63 characters\n1874057 - Pod stuck in CreateContainerError - error msg=\"container_linux.go:348: starting container process caused \\\"chdir to cwd (\\\\\\\"/mount-point\\\\\\\") set in config.json failed: permission denied\\\"\"\n1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver\n1874192 - [RFE] \"Create Backing Store\" page doesn\u0027t allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider\n1874240 - [vsphere] unable to deprovision - Runtime error list attached objects\n1874248 - Include validation for vcenter host in the install-config\n1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6\n1874583 - apiserver tries and fails to log an event when shutting down\n1874584 - add retry for etcd errors in kube-apiserver\n1874638 - Missing logging for nbctl daemon\n1874736 - [downstream] no version info for the helm-operator\n1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution\n1874968 - Accessibility: The project selection drop down is a keyboard trap\n1875247 - Dependency resolution error \"found more than one head for channel\" is unhelpful for users\n1875516 - disabled scheduling is easy to miss in node page of OCP console\n1875598 - machine status is Running for a master node which has been terminated from the console\n1875806 - When creating a service of type \"LoadBalancer\" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. 
\n1876166 - need to be able to disable kube-apiserver connectivity checks\n1876469 - Invalid doc link on yaml template schema description\n1876701 - podCount specDescriptor change doesn\u0027t take effect on operand details page\n1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt\n1876935 - AWS volume snapshot is not deleted after the cluster is destroyed\n1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted\n1877105 - add redfish to enabled_bios_interfaces\n1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`\n1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown\n1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only \u0027rootDevices\u0027\n1877681 - Manually created PV can not be used\n1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53\n1877740 - RHCOS unable to get ip address during first boot\n1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5\n1877919 - panic in multus-admission-controller\n1877924 - Cannot set BIOS config using Redfish with Dell iDracs\n1878022 - Met imagestreamimport error when import the whole image repository\n1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default \"Filesystem Name\" instead of providing a textbox, \u0026 the name should be validated\n1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status\n1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM\n1878766 - CPU consumption on nodes is higher than the CPU count of the node. \n1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
\n1878823 - \"oc adm release mirror\" generating incomplete imageContentSources when using \"--to\" and \"--to-release-image\"\n1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode\n1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used\n1878953 - RBAC error shows when normal user access pvc upload page\n1878956 - `oc api-resources` does not include API version\n1878972 - oc adm release mirror removes the architecture information\n1879013 - [RFE]Improve CD-ROM interface selection\n1879056 - UI should allow to change or unset the evictionStrategy\n1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled\n1879094 - RHCOS dhcp kernel parameters not working as expected\n1879099 - Extra reboot during 4.5 -\u003e 4.6 upgrade\n1879244 - Error adding container to network \"ipvlan-host-local\": \"master\" field is required\n1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder\n1879282 - Update OLM references to point to the OLM\u0027s new doc site\n1879283 - panic after nil pointer dereference in pkg/daemon/update.go\n1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests\n1879419 - [RFE]Improve boot source description for \u0027Container\u0027 and \u2018URL\u2019\n1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted. 
\n1879565 - IPv6 installation fails on node-valid-hostname\n1879777 - Overlapping, divergent openshift-machine-api namespace manifests\n1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with \u0027Basic\u0027, skipping basic authentication in Log message in thanos-querier pod the oauth-proxy\n1879930 - Annotations shouldn\u0027t be removed during object reconciliation\n1879976 - No other channel visible from console\n1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc. \n1880148 - dns daemonset rolls out slowly in large clusters\n1880161 - Actuator Update calls should have fixed retry time\n1880259 - additional network + OVN network installation failed\n1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as \"Failed\"\n1880410 - Convert Pipeline Visualization node to SVG\n1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn\n1880443 - broken machine pool management on OpenStack\n1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s. 
\n1880473 - IBM Cloudpak operators installation stuck \"UpgradePending\" with InstallPlan status updates failing due to size limitation\n1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)\n1880785 - CredentialsRequest missing description in `oc explain`\n1880787 - No description for Provisioning CRD for `oc explain`\n1880902 - need dnsPlocy set in crd ingresscontrollers\n1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster\n1881027 - Cluster installation fails at with error :  the container name \\\"assisted-installer\\\" is already in use\n1881046 - [OSP] openstack-cinder-csi-driver-operator doesn\u0027t contain required manifests and assets\n1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node\n1881268 - Image uploading failed but wizard claim the source is available\n1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration\n1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup\n1881881 - unable to specify target port manually resulting in application not reachable\n1881898 - misalignment of sub-title in quick start headers\n1882022 - [vsphere][ipi] directory path is incomplete, terraform can\u0027t find the cluster\n1882057 - Not able to select access modes for snapshot and clone\n1882140 - No description for spec.kubeletConfig\n1882176 - Master recovery instructions don\u0027t handle IP change well\n1882191 - Installation fails against external resources which lack DNS Subject Alternative Name\n1882209 - [ BateMetal IPI ] local coredns resolution not working\n1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from \"Too large resource version\"\n1882268 - [e2e][automation]Add Integration 
Test for Snapshots\n1882361 - Retrieve and expose the latest report for the cluster\n1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use\n1882556 - git:// protocol in origin tests is not currently proxied\n1882569 - CNO: Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1882608 - Spot instance not getting created on AzureGovCloud\n1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance\n1882649 - IPI installer labels all images it uploads into glance as qcow2\n1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic\n1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page\n1882660 - Operators in a namespace should be installed together when approve one\n1882667 - [ovn] br-ex Link not found when scale up RHEL worker\n1882723 - [vsphere]Suggested mimimum value for providerspec not working\n1882730 - z systems not reporting correct core count in recording rule\n1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully\n1882781 - nameserver= option to dracut creates extra NM connection profile\n1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined\n1882844 - [IPI on vsphere] Executing \u0027openshift-installer destroy cluster\u0027 leaves installer tag categories in vsphere\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1883388 - Bare Metal Hosts Details page doesn\u0027t show Mainitenance and Power On/Off status\n1883422 - operator-sdk cleanup fail after installing operator with \"run bundle\" without installmode and og with ownnamespace\n1883425 - Gather top installplans and their count\n1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2\n1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1883538 - must gather report \"cannot file manila/aws 
ebs/ovirt csi related namespaces and objects\" error\n1883560 - operator-registry image needs clean up in /tmp\n1883563 - Creating duplicate namespace from create namespace modal breaks the UI\n1883614 - [OCP 4.6] [UI] UI should not describe power cycle as \"graceful\"\n1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate\n1883660 - e2e-metal-ipi CI job consistently failing on 4.4\n1883765 - [user workload monitoring] improve latency of Thanos sidecar  when streaming read requests\n1883766 - [e2e][automation] Adjust tests for UI changes\n1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations\n1883773 - opm alpha bundle build fails on win10 home\n1883790 - revert \"force cert rotation every couple days for development\" in 4.7\n1883803 - node pull secret feature is not working as expected\n1883836 - Jenkins imagestream ubi8 and nodejs12 update\n1883847 - The UI does not show checkbox for enable encryption at rest for OCS\n1883853 - go list -m all does not work\n1883905 - race condition in opm index add --overwrite-latest\n1883946 - Understand why trident CSI pods are getting deleted by OCP\n1884035 - Pods are illegally transitioning back to pending\n1884041 - e2e should provide error info when minimum number of pods aren\u0027t ready in kube-system namespace\n1884131 - oauth-proxy repository should run tests\n1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied\n1884221 - IO becomes unhealthy due to a file change\n1884258 - Node network alerts should work on ratio rather than absolute values\n1884270 - Git clone does not support SCP-style ssh locations\n1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout\n1884435 - vsphere - loopback is randomly not being added to resolver\n1884565 - oauth-proxy crashes on invalid usage\n1884584 - Kuryr controller continuously restarting due to 
unable to clean up Network Policy\n1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users\n1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment\n1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu. \n1884632 - Adding BYOK disk encryption through DES\n1884654 - Utilization of a VMI is not populated\n1884655 - KeyError on self._existing_vifs[port_id]\n1884664 - Operator install page shows \"installing...\" instead of going to install status page\n1884672 - Failed to inspect hardware. Reason: unable to start inspection: \u0027idrac\u0027\n1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure\n1884724 - Quick Start: Serverless quickstart doesn\u0027t match Operator install steps\n1884739 - Node process segfaulted\n1884824 - Update baremetal-operator libraries to k8s 1.19\n1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping\n1885138 - Wrong detection of pending state in VM details\n1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2\n1885165 - NoRunningOvnMaster alert falsely triggered\n1885170 - Nil pointer when verifying images\n1885173 - [e2e][automation] Add test for next run configuration feature\n1885179 - oc image append fails on push (uploading a new layer)\n1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig\n1885218 - [e2e][automation] Add virtctl to gating script\n1885223 - Sync with upstream (fix panicking cluster-capacity binary)\n1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885244 - prometheus-operator: Logging is 
broken due to mix of k8s.io/klog v1 and v2\n1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI\n1885315 - unit tests fail on slow disks\n1885319 - Remove redundant use of group and kind of DataVolumeTemplate\n1885343 - Console doesn\u0027t load in iOS Safari when using self-signed certificates\n1885344 - 4.7 upgrade - dummy bug for 1880591\n1885358 - add p\u0026f configuration to protect openshift traffic\n1885365 - MCO does not respect the install section of systemd files when enabling\n1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating\n1885398 - CSV with only Webhook conversion can\u0027t be installed\n1885403 - Some OLM events hide the underlying errors\n1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case\n1885425 - opm index add cannot batch add multiple bundles that use skips\n1885543 - node tuning operator builds and installs an unsigned RPM\n1885644 - Panic output due to timeouts in openshift-apiserver\n1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU \u003c 30 || totalMemory \u003c 72 GiB for initial deployment\n1885702 - Cypress:  Fix \u0027aria-hidden-focus\u0027 accesibility violations\n1885706 - Cypress:  Fix \u0027link-name\u0027 accesibility violation\n1885761 - DNS fails to resolve in some pods\n1885856 - Missing registry v1 protocol usage metric on telemetry\n1885864 - Stalld service crashed under the worker node\n1885930 - [release 4.7] Collect ServiceAccount statistics\n1885940 - kuryr/demo image ping not working\n1886007 - upgrade test with service type load balancer will never work\n1886022 - Move range allocations to CRD\u0027s\n1886028 - [BM][IPI] Failed to delete node after scale down\n1886111 - 
UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas\n1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd\n1886154 - System roles are not present while trying to create new role binding through web console\n1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5-\u003e4.6 causes broadcast storm\n1886168 - Remove Terminal Option for Windows Nodes\n1886200 - greenwave / CVP is failing on bundle validations, cannot stage push\n1886229 - Multipath support for RHCOS sysroot\n1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage\n1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status\n1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL\n1886397 - Move object-enum to console-shared\n1886423 - New Affinities don\u0027t contain ID until saving\n1886435 - Azure UPI uses deprecated command \u0027group deployment\u0027\n1886449 - p\u0026f: add configuration to protect oauth server traffic\n1886452 - layout options doesn\u0027t gets selected style on click i.e grey background\n1886462 - IO doesn\u0027t recognize namespaces - 2 resources with the same name in 2 namespaces -\u003e only 1 gets collected\n1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest\n1886524 - Change default terminal command for Windows Pods\n1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution\n1886600 - panic: assignment to entry in nil map\n1886620 - Application behind service load balancer with PDB is not disrupted\n1886627 - Kube-apiserver pods restarting/reinitializing periodically\n1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider\n1886636 - Panic in machine-config-operator\n1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer. 
\n1886751 - Gather MachineConfigPools\n1886766 - PVC dropdown has \u0027Persistent Volume\u0027 Label\n1886834 - ovn-cert is mandatory in both master and node daemonsets\n1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState\n1886861 - ordered-values.yaml not honored if values.schema.json provided\n1886871 - Neutron ports created for hostNetworking pods\n1886890 - Overwrite jenkins-agent-base imagestream\n1886900 - Cluster-version operator fills logs with \"Manifest: ...\" spew\n1886922 - [sig-network] pods should successfully create sandboxes by getting pod\n1886973 - Local storage operator doesn\u0027t include correctly populate LocalVolumeDiscoveryResult in console\n1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO\n1887010 - Imagepruner met error \"Job has reached the specified backoff limit\" which causes image registry degraded\n1887026 - FC volume attach fails with \u201cno fc disk found\u201d error on OCP 4.6 PowerVM cluster\n1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6\n1887046 - Event for LSO need update to avoid confusion\n1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image\n1887375 - User should be able to specify volumeMode when creating pvc from web-console\n1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console\n1887392 - openshift-apiserver: delegated authn/z should have ttl \u003e metrics/healthz/readyz/openapi interval\n1887428 - oauth-apiserver service should be monitored by prometheus\n1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting \"degraded: False\"\n1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data\n1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN 
Kubernetes\n1887465 - Deleted project is still referenced\n1887472 - unable to edit application group for KSVC via gestures (shift+Drag)\n1887488 - OCP 4.6:  Topology Manager OpenShift E2E test fails:  gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface\n1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster\n1887525 - Failures to set master HardwareDetails cannot easily be debugged\n1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable\n1887585 - ovn-masters stuck in crashloop after scale test\n1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade. \n1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator\n1887740 - cannot install descheduler operator after uninstalling it\n1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events\n1887750 - `oc explain localvolumediscovery` returns empty description\n1887751 - `oc explain localvolumediscoveryresult` returns empty description\n1887778 - Add ContainerRuntimeConfig gatherer\n1887783 - PVC upload cannot continue after approve the certificate\n1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard\n1887799 - User workload monitoring prometheus-config-reloader OOM\n1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky\n1887863 - Installer panics on invalid flavor\n1887864 - Clean up dependencies to avoid invalid scan flagging\n1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison\n1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig\n1888015 - workaround kubelet graceful 
termination of static pods bug\n1888028 - prevent extra cycle in aggregated apiservers\n1888036 - Operator details shows old CRD versions\n1888041 - non-terminating pods are going from running to pending\n1888072 - Setting Supermicro node to PXE boot via Redfish doesn\u0027t take affect\n1888073 - Operator controller continuously busy looping\n1888118 - Memory requests not specified for image registry operator\n1888150 - Install Operand Form on OperatorHub is displaying unformatted text\n1888172 - PR 209 didn\u0027t update the sample archive, but machineset and pdbs are now namespaced\n1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build\n1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5\n1888311 - p\u0026f: make SAR traffic from oauth and openshift apiserver exempt\n1888363 - namespaces crash in dev\n1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created\n1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected\n1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC\n1888494 - imagepruner pod is error when image registry storage is not configured\n1888565 - [OSP] machine-config-daemon-firstboot.service failed with \"error reading osImageURL from rpm-ostree\"\n1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error\n1888601 - The poddisruptionbudgets is using the operator service account, instead of gather\n1888657 - oc doesn\u0027t know its name\n1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable\n1888671 - Document the Cloud Provider\u0027s ignore-volume-az setting\n1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image\n1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s\", cr.GetName()\n1888827 - ovnkube-master 
may segfault when trying to add IPs to a nil address set\n1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster\n1888866 - AggregatedAPIDown permanently firing after removing APIService\n1888870 - JS error when using autocomplete in YAML editor\n1888874 - hover message are not shown for some properties\n1888900 - align plugins versions\n1888985 - Cypress:  Fix \u0027Ensures buttons have discernible text\u0027 accesibility violation\n1889213 - The error message of uploading failure is not clear enough\n1889267 - Increase the time out for creating template and upload image in the terraform\n1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)\n1889374 - Kiali feature won\u0027t work on fresh 4.6 cluster\n1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode\n1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade\n1889515 - Accessibility - The symbols e.g checkmark in the Node \u003e overview page has no text description, label, or other accessible information\n1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance\n1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown\n1889577 - Resources are not shown on project workloads page\n1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment\n1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages\n1889692 - Selected Capacity is showing wrong size\n1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15\n1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off\n1889710 - Prometheus metrics on disk take more space compared to OCP 4.5\n1889721 - opm index add 
semver-skippatch mode does not respect prerelease versions\n1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn\u0027t see the Disk tab\n1889767 - [vsphere] Remove certificate from upi-installer image\n1889779 - error when destroying a vSphere installation that failed early\n1889787 - OCP is flooding the oVirt engine with auth errors\n1889838 - race in Operator update after fix from bz1888073\n1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1\n1889863 - Router prints incorrect log message for namespace label selector\n1889891 - Backport timecache LRU fix\n1889912 - Drains can cause high CPU usage\n1889921 - Reported Degraded=False Available=False pair does not make sense\n1889928 - [e2e][automation] Add more tests for golden os\n1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName\n1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings\n1890074 - MCO extension kernel-headers is invalid\n1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest\n1890130 - multitenant mode consistently fails CI\n1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e\n1890145 - The mismatched of font size for Status Ready and Health Check secondary text\n1890180 - FieldDependency x-descriptor doesn\u0027t support non-sibling fields\n1890182 - DaemonSet with existing owner garbage collected\n1890228 - AWS: destroy stuck on route53 hosted zone not found\n1890235 - e2e: update Protractor\u0027s checkErrors logging\n1890250 - workers may fail to join the cluster during an update from 4.5\n1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member\n1890270 - External IP doesn\u0027t work if the IP address is not assigned to a node\n1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability\n1890456 - [vsphere] mapi_instance_create_failed 
doesn\u0027t work on vsphere\n1890467 - unable to edit an application without a service\n1890472 - [Kuryr] Bulk port creation exception not completely formatted\n1890494 - Error assigning Egress IP on GCP\n1890530 - cluster-policy-controller doesn\u0027t gracefully terminate\n1890630 - [Kuryr] Available port count not correctly calculated for alerts\n1890671 - [SA] verify-image-signature using service account does not work\n1890677 - \u0027oc image info\u0027 claims \u0027does not exist\u0027 for application/vnd.oci.image.manifest.v1+json manifest\n1890808 - New etcd alerts need to be added to the monitoring stack\n1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn\u0027t sync the \"overall\" sha it syncs only the sub arch sha. \n1890984 - Rename operator-webhook-config to sriov-operator-webhook-config\n1890995 - wew-app should provide more insight into why image deployment failed\n1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call\n1891047 - Helm chart fails to install using developer console because of TLS certificate error\n1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler\n1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI\n1891108 - p\u0026f: Increase the concurrency share of workload-low priority level\n1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)\n1891189 - [LSO] max device limit is accepting negative values. 
PVC is not getting created and no error is shown\n1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn\u0027t meet requirements of chart)\n1891362 - Wrong metrics count for openshift_build_result_total\n1891368 - fync should be fsync for etcdHighFsyncDurations alert\u0027s annotations.message\n1891374 - fync should be fsync for etcdHighFsyncDurations critical alert\u0027s annotations.message\n1891376 - Extra text in Cluster Utilization charts\n1891419 - Wrong detail head on network policy detail page. \n1891459 - Snapshot tests should report stderr of failed commands\n1891498 - Other machine config pools do not show during update\n1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage\n1891551 - Clusterautoscaler doesn\u0027t scale up as expected\n1891552 - Handle missing labels as empty. \n1891555 - The windows oc.exe binary does not have version metadata\n1891559 - kuryr-cni cannot start new thread\n1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11\n1891625 - [Release 4.7] Mutable LoadBalancer Scope\n1891702 - installer get pending when additionalTrustBundle is added into  install-config.yaml\n1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails\n1891740 - OperatorStatusChanged is noisy\n1891758 - the authentication operator may spam DeploymentUpdated event endlessly\n1891759 - Dockerfile builds cannot change /etc/pki/ca-trust\n1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1\n1891825 - Error message not very informative in case of mode mismatch\n1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 
\n1891951 - UI should show warning while creating pools with compression on\n1891952 - [Release 4.7] Apps Domain Enhancement\n1891993 - 4.5 to 4.6 upgrade doesn\u0027t remove deployments created by marketplace\n1891995 - OperatorHub displaying old content\n1891999 - Storage efficiency card showing wrong compression ratio\n1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28\u0027 not found (required by ./opm)\n1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. \n1892198 - TypeError in \u0027Performance Profile\u0027 tab displayed for \u0027Performance Addon Operator\u0027\n1892288 - assisted install workflow creates excessive control-plane disruption\n1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config\n1892358 - [e2e][automation] update feature gate for kubevirt-gating job\n1892376 - Deleted netnamespace could not be re-created\n1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky\n1892393 - TestListPackages is flaky\n1892448 - MCDPivotError alert/metric missing\n1892457 - NTO-shipped stalld needs to use FIFO for boosting. \n1892467 - linuxptp-daemon crash\n1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env\n1892653 - User is unable to create KafkaSource with v1beta\n1892724 - VFS added to the list of devices of the nodeptpdevice CRD\n1892799 - Mounting additionalTrustBundle in the operator\n1893117 - Maintenance mode on vSphere blocks installation. \n1893351 - TLS secrets are not able to edit on console. 
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import 
wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca 
injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest\n1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded\n1896244 - Found a panic in storage e2e test\n1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general\n1896302 - [e2e][automation] Fix 4.6 test failures\n1896365 - [Migration]The SDN migration cannot revert under some conditions\n1896384 - [ovirt IPI]: local coredns resolution not working\n1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6\n1896529 - Incorrect instructions in the Serverless operator and application quick starts\n1896645 - documentationBaseURL needs to be updated for 4.7\n1896697 - [Descheduler] policy.yaml param in cluster configmap is empty\n1896704 - Machine API components should honour cluster wide proxy settings\n1896732 - \"Attach to Virtual Machine OS\" button should not be visible on old clusters\n1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection  is incompatible with SR-IOV operator\n1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails\n1896918 - start creating new-style Secrets for AWS\n1896923 - DNS pod /metrics exposed on anonymous http port\n1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... 
must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\"  \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe  Translation of  \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by it's name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove &apos; from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - `oc adm release mirror` panic panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service accout issuer
1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesnt set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format:
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE -Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support
Directory Path to Devfile\n1910805 - Missing translation for Pipeline status and breadcrumb text\n1910829 - Cannot delete a PVC if the dv\u0027s phase is WaitForFirstConsumer\n1910840 - Show Nonexistent  command info in the `oc rollback -h` help page\n1910859 - breadcrumbs doesn\u0027t use last namespace\n1910866 - Unify templates string\n1910870 - Unify template dropdown action\n1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6\n1911129 - Monitoring charts renders nothing when switching from a Deployment to \"All workloads\"\n1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard\n1911212 - [MSTR-998] API Performance Dashboard \"Period\" drop-down has a choice \"$__auto_interval_period\" which can bring \"1:154: parse error: missing unit character in duration\"\n1911213 - Wrong and misleading warning for VMs that were created manually (not from template)\n1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created\n1911269 - waiting for the build message present when build exists\n1911280 - Builder images are not detected for Dotnet, Httpd, NGINX\n1911307 - Pod Scale-up requires extra privileges in OpenShift web-console\n1911381 - \"Select Persistent Volume Claim project\" shows in customize wizard when select a source available template\n1911382 - \"source volumeMode (Block) and target volumeMode (Filesystem) do not match\" shows in VM Error\n1911387 - Hit error - \"Cannot read property \u0027value\u0027 of undefined\" while creating VM from template\n1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation\n1911418 - [v2v] The target storage class name is not displayed if default storage class is used\n1911434 - git ops empty state page displays icon with watermark\n1911443 - SSH Cretifiaction field should be validated\n1911465 - IOPS display wrong unit\n1911474 - Devfile Application Group Does Not Delete Cleanly (errors)\n1911487 - Pruning Deployments 
should use ReplicaSets instead of ReplicationController\n1911574 - Expose volume mode  on Upload Data form\n1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined\n1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel\n1911656 - using \u0027operator-sdk run bundle\u0027 to install operator successfully, but the command output said \u0027Failed to run bundle\u0027\u0027\n1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state\n1911782 - Descheduler should not evict pod used local storage by the PVC\n1911796 - uploading flow being displayed before submitting the form\n1912066 - The ansible type operator\u0027s manager container is not stable when managing the CR\n1912077 - helm operator\u0027s default rbac forbidden\n1912115 - [automation] Analyze job keep failing because of \u0027JavaScript heap out of memory\u0027\n1912237 - Rebase CSI sidecars for 4.7\n1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page\n1912409 - Fix flow schema deployment\n1912434 - Update guided tour modal title\n1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken\n1912523 - Standalone pod status not updating in topology graph\n1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion\n1912558 - TaskRun list and detail screen doesn\u0027t show Pending status\n1912563 - p\u0026f: carry 97206: clean up executing request on panic\n1912565 - OLM macOS local build broken by moby/term dependency\n1912567 - [OCP on RHV] Node becomes to \u0027NotReady\u0027 status when shutdown vm from RHV UI only on the second deletion\n1912577 - 4.1/4.2-\u003e4.3-\u003e...-\u003e 4.7 upgrade is stuck during 4.6-\u003e4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff\n1912590 - publicImageRepository not being populated\n1912640 - Go 
operator\u0027s controller pods is forbidden\n1912701 - Handle dual-stack configuration for NIC IP\n1912703 - multiple queries can\u0027t be plotted in the same graph under some conditons\n1912730 - Operator backed: In-context should support visual connector if SBO is not installed\n1912828 - Align High Performance VMs with High Performance in RHV-UI\n1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates\n1912852 - VM from wizard - available VM templates - \"storage\" field is \"0 B\"\n1912888 - recycler template should be moved to KCM operator\n1912907 - Helm chart repository index can contain unresolvable relative URL\u0027s\n1912916 - Set external traffic policy to cluster for IBM platform\n1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller\n1912938 - Update confirmation modal for quick starts\n1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment\n1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment\n1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver\n1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912977 - rebase upstream static-provisioner\n1913006 - Remove etcd v2 specific alerts with etcd_http* metrics\n1913011 - [OVN] Pod\u0027s external traffic not use egressrouter macvlan ip as a source ip\n1913037 - update static-provisioner base image\n1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state\n1913085 - Regression OLM uses scoped client for 
CRD installation\n1913096 - backport: cadvisor machine metrics are missing in k8s 1.19\n1913132 - The installation of Openshift Virtualization reports success early before it \u0027s succeeded eventually\n1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root\n1913196 - Guided Tour doesn\u0027t handle resizing of browser\n1913209 - Support modal should be shown for community supported templates\n1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort\n1913249 - update info alert this template is not aditable\n1913285 - VM list empty state should link to virtualization quick starts\n1913289 - Rebase AWS EBS CSI driver for 4.7\n1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled\n1913297 - Remove restriction of taints for arbiter node\n1913306 - unnecessary scroll bar is present on quick starts panel\n1913325 - 1.20 rebase for openshift-apiserver\n1913331 - Import from git: Fails to detect Java builder\n1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used\n1913343 - (release-4.7) Added changelog file for insights-operator\n1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator\n1913371 - Missing i18n key \"Administrator\" in namespace \"console-app\" and language \"en.\"\n1913386 - users can see metrics of namespaces for which they don\u0027t have rights when monitoring own services with prometheus user workloads\n1913420 - Time duration setting of resources is not being displayed\n1913536 - 4.6.9 -\u003e 4.7 upgrade hangs.  
RHEL 7.9 worker stuck on \"error enabling unit: Failed to execute operation: File exists\\\\n\\\"\n1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase\n1913560 - Normal user cannot load template on the new wizard\n1913563 - \"Virtual Machine\" is not on the same line in create button when logged with normal user\n1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table\n1913568 - Normal user cannot create template\n1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker\n1913585 - Topology descriptive text fixes\n1913608 - Table data contains data value None after change time range in graph and change back\n1913651 - Improved Red Hat image and crashlooping OpenShift pod collection\n1913660 - Change location and text of Pipeline edit flow alert\n1913685 - OS field not disabled when creating a VM from a template\n1913716 - Include additional use of existing libraries\n1913725 - Refactor Insights Operator Plugin states\n1913736 - Regression: fails to deploy computes when using root volumes\n1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes\n1913751 - add third-party network plugin test suite to openshift-tests\n1913783 - QE-To fix the merging pr issue, commenting the afterEach() block\n1913807 - Template support badge should not be shown for community supported templates\n1913821 - Need definitive steps about uninstalling descheduler operator\n1913851 - Cluster Tasks are not sorted in pipeline builder\n1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists\n1913951 - Update the Devfile Sample Repo to an Official Repo Host\n1913960 - Cluster Autoscaler should use 1.20 dependencies\n1913969 - Field dependency descriptor can sometimes cause an exception\n1914060 - Disk created from \u0027Import via Registry\u0027 cannot be used as boot disk\n1914066 - [sriov] sriov dp pod crash when delete ovs HW 
offload policy\n1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)\n1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances\n1914125 - Still using /dev/vde as default device path when create localvolume\n1914183 - Empty NAD page is missing link to quickstarts\n1914196 - target port in `from dockerfile` flow does nothing\n1914204 - Creating VM from dev perspective may fail with template not found error\n1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets\n1914212 - [e2e][automation] Add test to validate bootable disk souce\n1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes\n1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows\n1914287 - Bring back selfLink\n1914301 - User VM Template source should show the same provider as template itself\n1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs\n1914309 - /terminal page when WTO not installed shows nonsensical error\n1914334 - order of getting started samples is arbitrary\n1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel]  timeout on s390x\n1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI\n1914405 - Quick search modal should be opened when coming back from a selection\n1914407 - Its not clear that node-ca is running as non-root\n1914427 - Count of pods on the dashboard is incorrect\n1914439 - Typo in SRIOV port create command example\n1914451 - cluster-storage-operator pod running as root\n1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true\n1914642 - Customize Wizard Storage tab does not pass validation\n1914723 - SamplesTBRInaccessibleOnBoot 
Alert has a misspelling\n1914793 - device names should not be translated\n1914894 - Warn about using non-groupified api version\n1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug\n1914932 - Put correct resource name in relatedObjects\n1914938 - PVC disk is not shown on customization wizard general tab\n1914941 - VM Template rootdisk is not deleted after fetching default disk bus\n1914975 - Collect logs from openshift-sdn namespace\n1915003 - No estimate of average node readiness during lifetime of a cluster\n1915027 - fix MCS blocking iptables rules\n1915041 - s3:ListMultipartUploadParts is relied on implicitly\n1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons\n1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours\n1915085 - Pods created and rapidly terminated get stuck\n1915114 - [aws-c2s] worker machines are not create during install\n1915133 - Missing default pinned nav items in dev perspective\n1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource\n1915187 - Remove the \"Tech preview\" tag in web-console for volumesnapshot\n1915188 - Remove HostSubnet anonymization\n1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment\n1915217 - OKD payloads expect to be signed with production keys\n1915220 - Remove dropdown workaround for user settings\n1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure\n1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod\n1915277 - [e2e][automation]fix cdi upload form test\n1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout\n1915304 - Updating scheduling component builder \u0026 base images to be consistent with ART\n1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows 
node\n1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection\n1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod\n1915357 - Dev Catalog doesn\u0027t load anything if virtualization operator is installed\n1915379 - New template wizard should require provider and make support input a dropdown type\n1915408 - Failure in operator-registry kind e2e test\n1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation\n1915460 - Cluster name size might affect installations\n1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance\n1915540 - Silent 4.7 RHCOS install failure on ppc64le\n1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)\n1915582 - p\u0026f: carry upstream pr 97860\n1915594 - [e2e][automation] Improve test for disk validation\n1915617 - Bump bootimage for various fixes\n1915624 - \"Please fill in the following field: Template provider\" blocks customize wizard\n1915627 - Translate Guided Tour text. \n1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error\n1915647 - Intermittent White screen when the connector dragged to revision\n1915649 - \"Template support\" pop up is not a warning; checkbox text should be rephrased\n1915654 - [e2e][automation] Add a verification for Afinity modal should hint \"Matching node found\"\n1915661 - Can\u0027t run the \u0027oc adm prune\u0027 command in a pod\n1915672 - Kuryr doesn\u0027t work with selfLink disabled. 
\n1915674 - Golden image PVC creation - storage size should be taken from the template\n1915685 - Message for not supported template is not clear enough\n1915760 - Need to increase timeout to wait rhel worker get ready\n1915793 - quick starts panel syncs incorrectly across browser windows\n1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster\n1915818 - vsphere-problem-detector: use \"_totals\" in metrics\n1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol\n1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version\n1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0\n1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics\n1915885 - Kuryr doesn\u0027t support workers running on multiple subnets\n1915898 - TaskRun log output shows \"undefined\" in streaming\n1915907 - test/cmd/builds.sh uses docker.io\n1915912 - sig-storage-csi-snapshotter image not available\n1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard\n1915939 - Resizing the browser window removes Web Terminal Icon\n1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]\n1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7\n1915962 - ROKS: manifest with machine health check fails to apply in 4.7\n1915972 - Global configuration breadcrumbs do not work as expected\n1915981 - Install ethtool and conntrack in container for debugging\n1915995 - \"Edit RoleBinding Subject\" action under RoleBinding list page kebab actions causes unhandled exception\n1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups\n1916021 - OLM enters infinite loop if Pending CSV 
replaces itself\n1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry\n1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert\u0027s annotations\n1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk\n1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration\n1916145 - Explicitly set minimum versions of python libraries\n1916164 - Update csi-driver-nfs builder \u0026 base images to be consistent with ART\n1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7\n1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third\n1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2\n1916379 - error metrics from vsphere-problem-detector should be gauge\n1916382 - Can\u0027t create ext4 filesystems with Ignition\n1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving \u0027verified: false\u0027 even for verified updates\n1916401 - Deleting an ingress controller with a bad DNS Record hangs\n1916417 - [Kuryr] Must-gather does not have all Custom Resources information\n1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image\n1916454 - teach CCO about upgradeability from 4.6 to 4.7\n1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation\n1916502 - Boot disk mirroring fails with mdadm error\n1916524 - Two rootdisk shows on storage step\n1916580 - Default yaml is broken for VM and VM template\n1916621 - oc adm node-logs examples are wrong\n1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
\n1916692 - Possibly fails to destroy LB and thus cluster\n1916711 - Update Kube dependencies in MCO to 1.20.0\n1916747 - remove links to quick starts if virtualization operator isn\u0027t updated to 2.6\n1916764 - editing a workload with no application applied, will auto fill the app\n1916834 - Pipeline Metrics - Text Updates\n1916843 - collect logs from openshift-sdn-controller pod\n1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed\n1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually\n1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited\n1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error \"Forbidden: cannot specify lbFloatingIP and apiFloatingIP together\"\n1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace\n1917101 - [UPI on oVirt] - \u0027RHCOS image\u0027 topic isn\u0027t located in the right place in UPI document\n1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to \u0027\"ProxyConfigController\" controller failed to sync \"key\"\u0027 error\n1917117 - Common templates - disks screen: invalid disk name\n1917124 - Custom template - clone existing PVC - the name of the target VM\u0027s data volume is hard-coded; only one VM can be created\n1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator\n1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
\n1917148 - [oVirt] Consume 23-10 ovirt sdk\n1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened\n1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console\n1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory\n1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7\n1917327 - annotations.message maybe wrong for NTOPodsNotReady alert\n1917367 - Refactor periodic.go\n1917371 - Add docs on how to use the built-in profiler\n1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console\n1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui\n1917484 - [BM][IPI] Failed to scale down machineset\n1917522 - Deprecate --filter-by-os in oc adm catalog mirror\n1917537 - controllers continuously busy reconciling operator\n1917551 - use min_over_time for vsphere prometheus alerts\n1917585 - OLM Operator install page missing i18n\n1917587 - Manila CSI operator becomes degraded if user doesn\u0027t have permissions to list share types\n1917605 - Deleting an exgw causes pods to no longer route to other exgws\n1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API\n1917656 - Add to Project/application for eventSources from topology shows 404\n1917658 - Show TP badge for sources powered by camel connectors in create flow\n1917660 - Editing parallelism of job get error info\n1917678 - Could not provision pv when no symlink and target found on rhel worker\n1917679 - Hide double CTA in admin pipelineruns tab\n1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster. 
\n1917759 - Console operator panics after setting plugin that does not exists to the console-operator config\n1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0\n1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0\n1917799 - Gather s list of names and versions of installed OLM operators\n1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error\n1917814 - Show Broker create option in eventing under admin perspective\n1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1917872 - [oVirt] rebase on latest SDK 2021-01-12\n1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image\n1917938 - upgrade version of dnsmasq package\n1917942 - Canary controller causes panic in ingress-operator\n1918019 - Undesired scrollbars in markdown area of QuickStart\n1918068 - Flaky olm integration tests\n1918085 - reversed name of job and namespace in cvo log\n1918112 - Flavor is not editable if a customize VM is created from cli\n1918129 - Update IO sample archive with missing resources \u0026 remove IP anonymization from clusteroperator resources\n1918132 - i18n: Volume Snapshot Contents menu is not translated\n1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2\n1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn\u0027t be installed on OSP\n1918153 - When `\u0026` character is set as an environment variable in a build config it is getting converted as `\\u0026`\n1918185 - Capitalization on PLR details page\n1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections\n1918318 - Kamelet connector\u0027s are not shown in eventing section under Admin perspective\n1918351 - Gather SAP configuration (SCC \u0026 ClusterRoleBinding)\n1918375 - [calico] rbac-proxy container in kube-proxy fails to create 
tokenreviews\n1918395 - [ovirt] increase livenessProbe period\n1918415 - MCD nil pointer on dropins\n1918438 - [ja_JP, zh_CN] Serverless i18n misses\n1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig\n1918471 - CustomNoUpgrade Feature gates are not working correctly\n1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk\n1918622 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1918623 - Updating ose-jenkins-agent-nodejs-12 builder \u0026 base images to be consistent with ART\n1918625 - Updating ose-jenkins-agent-nodejs-10 builder \u0026 base images to be consistent with ART\n1918635 - Updating openshift-jenkins-2 builder \u0026 base images to be consistent with ART #1197\n1918639 - Event listener with triggerRef crashes the console\n1918648 - Subscription page doesn\u0027t show InstallPlan correctly\n1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack\n1918748 - helmchartrepo is not http(s)_proxy-aware\n1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1918803 - Need dedicated details page w/ global config breadcrumbs for \u0027KnativeServing\u0027 plugin\n1918826 - Insights popover icons are not horizontally aligned\n1918879 - need better debug for bad pull secrets\n1918958 - The default NMstate instance from the operator is incorrect\n1919097 - Close bracket \")\" missing at the end of the sentence in the UI\n1919231 - quick search modal cut off on smaller screens\n1919259 - Make \"Add x\" singular in Pipeline Builder\n1919260 - VM Template list actions should not wrap\n1919271 - NM prepender script doesn\u0027t support systemd-resolved\n1919341 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry\n1919379 - dotnet logo out of date\n1919387 - Console 
login fails with no error when it can't write to localStorage
1919396 - A11y Violation: svg-img-alt on Pod Status ring
1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified
1919750 - Search InstallPlans got Minified React error
1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted
1919823 - OCP 4.7 Internationalization Chinese translate issue
1919851 - Visualization does not render when Pipeline & Task share same name
1919862 - The tip information for `oc new-project --skip-config-write` is wrong
1919876 - VM created via customize wizard cannot inherit template's PVC attributes
1919877 - Click on KSVC breaks with white screen
1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment
1919945 - user entered name value overridden by default value when selecting a git repository
1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference
1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions
1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration
1920200 - user-settings network error results in infinite loop of requests
1920205 - operator-registry e2e tests not working properly
1920214 - Bump golang to 1.15 in cluster-resource-override-admission
1920248 - re-running the pipelinerun with pipelinespec crashes the UI
1920320 - VM template field is "Not available" if it's created from common template
1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`
1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs
1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off
1920426 - Egress Router CNI OWNERS file should have ovn-k team members
1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username
1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time
1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn
1920481 - kuryr-cni pods using unreasonable amount of CPU
1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof
1920524 - Topology graph crashes adding Open Data Hub operator
1920526 - catalog operator causing CPU spikes and bad etcd performance
1920551 - Boot Order is not editable for Templates in "openshift" namespace
1920555 - bump cluster-resource-override-admission api dependencies
1920571 - fcp multipath will not recover failed paths automatically
1920619 - Remove default scheduler profile value
1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present
1920674 - MissingKey errors in bindings namespace
1920684 - Text in language preferences modal is misleading
1920695 - CI is broken because of bad image registry reference in the Makefile
1920756 - update generic-admission-server library to get the system:masters authorization optimization
1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set
1920771 - i18n: Delete persistent volume claim drop down is not translated
1920806 - [OVN] Nodes lost network connection after reboot on the vSphere UPI
1920912 - Unable to power off BMH from console
1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2"
1920984 - [e2e][automation] some menu items names are out dated
1921013 - Gather PersistentVolume definition (if any) used in image registry config
1921023 - Do not enable Flexible Scaling to true for Internal mode clusters (revert to 4.6 behavior)
1921087 - 'start next quick start' link doesn't work and is unintuitive
1921088 - test-cmd is failing on volumes.sh pretty consistently
1921248 - Clarify the kubelet configuration cr description
1921253 - Text filter default placeholder text not internationalized
1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window
1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo
1921277 - Fix Warning and Info log statements to handle arguments
1921281 - oc get -o yaml --export returns "error: unknown flag: --export"
1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist
1921556 - [OCS with Vault]: OCS pods didn't come up after deploying with Vault details from UI
1921572 - For external source (i.e. GitHub Source) form view as well shows yaml
1921580 - [e2e][automation] Test VM detail view actions dropdown does not pass
1921610 - Pipeline metrics font size inconsistency
1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921655 - [OSP] Incorrect error handling during cloudinfo generation
1921713 - [e2e][automation] fix failing VM migration tests
1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view
1921774 - delete application modal errors when a resource cannot be found
1921806 - Explore page APIResourceLinks aren't i18ned
1921823 - CheckBoxControls not internationalized
1921836 - AccessTableRows don't internationalize "User" or "Group"
1921857 - Test flake when hitting router in e2e tests due to one router not being up to date
1921880 - Dynamic plugins are not initialized on console load in production mode
1921911 - Installer PR #4589 is causing leak of IAM role policy bindings
1921921 - "Global Configuration" breadcrumb does not use sentence case
1921949 - Console bug - source code URL broken for gitlab self-hosted repositories
1921954 - Subscription-related constraints in ResolutionFailed events are misleading
1922015 - buttons in modal header are invisible on Safari
1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated
1922050 - [e2e][automation] Improve vm clone tests
1922066 - Cannot create VM from custom template which has extra disk
1922098 - Namespace selection dialog is not closed after select a namespace
1922099 - Updated Readme documentation for QE code review and setup
1922146 - Egress Router CNI doesn't have logging support.
1922267 - Collect specific ADFS error
1922292 - Bump RHCOS boot images for 4.7
1922454 - CRI-O doesn't enable pprof by default
1922473 - reconcile LSO images for 4.8
1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace
1922782 - Source registry missing docker:// in yaml
1922907 - Interop UI Tests - step implementation for updating feature files
1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons
1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD
1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything
1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources
1923102 - [vsphere-problem-detector-operator] pod's version is not correct
1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot
1923674 - k8s 1.20 vendor dependencies
1923721 - PipelineRun running status icon is not rotating
1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios
1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator
1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator
1923874 - Unable to specify values with % in kubeletconfig
1923888 - Fixes error metadata gathering
1923892 - Update arch.md after refactor.
1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator
1923895 - Changelog generation.
1923911 - [e2e][automation] Improve tests for vm details page and list filter
1923945 - PVC Name and Namespace resets when user changes os/flavor/workload
1923951 - EventSources shows `undefined` in project
1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins
1924046 - Localhost: Refreshing on a Project removes it from nav item urls
1924078 - Topology quick search View all results footer should be sticky.
1924081 - NTO should ship the latest Tuned daemon release 2.15
1924084 - backend tests incorrectly hard-code artifacts dir
1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build
1924135 - Under sufficient load, CRI-O may segfault
1924143 - Code Editor Decorator url is broken for Bitbucket repos
1924188 - Language selector dropdown doesn't always pre-select the language
1924365 - Add extra disk for VM which use boot source PXE
1924383 - Degraded network operator during upgrade to 4.7.z
1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box.
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on
1924583 - Deprecated templates are listed in the Templates screen
1924870 - pick upstream pr#96901: plumb context with request deadline
1924955 - Images from Private external registry not working in deploy Image
1924961 - k8sutil.TrimDNS1123Label creates invalid values
1924985 - Build egress-router-cni for both RHEL 7 and 8
1925020 - Console demo plugin deployment image should not point to dockerhub
1925024 - Remove extra validations on kafka source form view net section
1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running
1925072 - NTO needs to ship the current latest stalld v1.7.0
1925163 - Missing info about dev catalog in boot source template column
1925200 - Monitoring Alert icon is missing on the workload in Topology view
1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1
1925319 - bash syntax error in configure-ovs.sh script
1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data
1925516 - Pipeline Metrics Tooltips are overlapping data
1925562 - Add new ArgoCD link from GitOps application environments page
1925596 - Gitops details page image and commit id text overflows past card boundary
1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test
1926588 - The tarball of operator-sdk is not ready for ocp4.7
1927456 - 4.7 still points to 4.6 catalog images
1927500 - API server exits non-zero on 2 SIGTERM signals
1929278 - Monitoring workloads using too high a priorityclass
1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
1929920 - Cluster monitoring documentation link is broken - 404 not found

5.
References:

https://access.redhat.com/security/cve/CVE-2018-10103
https://access.redhat.com/security/cve/CVE-2018-10105
https://access.redhat.com/security/cve/CVE-2018-14461
https://access.redhat.com/security/cve/CVE-2018-14462
https://access.redhat.com/security/cve/CVE-2018-14463
https://access.redhat.com/security/cve/CVE-2018-14464
https://access.redhat.com/security/cve/CVE-2018-14465
https://access.redhat.com/security/cve/CVE-2018-14466
https://access.redhat.com/security/cve/CVE-2018-14467
https://access.redhat.com/security/cve/CVE-2018-14468
https://access.redhat.com/security/cve/CVE-2018-14469
https://access.redhat.com/security/cve/CVE-2018-14470
https://access.redhat.com/security/cve/CVE-2018-14553
https://access.redhat.com/security/cve/CVE-2018-14879
https://access.redhat.com/security/cve/CVE-2018-14880
https://access.redhat.com/security/cve/CVE-2018-14881
https://access.redhat.com/security/cve/CVE-2018-14882
https://access.redhat.com/security/cve/CVE-2018-16227
https://access.redhat.com/security/cve/CVE-2018-16228
https://access.redhat.com/security/cve/CVE-2018-16229
https://access.redhat.com/security/cve/CVE-2018-16230
https://access.redhat.com/security/cve/CVE-2018-16300
https://access.redhat.com/security/cve/CVE-2018-16451
https://access.redhat.com/security/cve/CVE-2018-16452
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-3884
https://access.redhat.com/security/cve/CVE-2019-5018
https://access.redhat.com/security/cve/CVE-2019-6977
https://access.redhat.com/security/cve/CVE-2019-6978
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9455
https://access.redhat.com/security/cve/CVE-2019-9458
https://access.redhat.com/security/cve/CVE-2019-11068
https://access.redhat.com/security/cve/CVE-2019-12614
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13225
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15165
https://access.redhat.com/security/cve/CVE-2019-15166
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-15917
https://access.redhat.com/security/cve/CVE-2019-15925
https://access.redhat.com/security/cve/CVE-2019-16167
https://access.redhat.com/security/cve/CVE-2019-16168
https://access.redhat.com/security/cve/CVE-2019-16231
https://access.redhat.com/security/cve/CVE-2019-16233
https://access.redhat.com/security/cve/CVE-2019-16935
https://access.redhat.com/security/cve/CVE-2019-17450
https://access.redhat.com/security/cve/CVE-2019-17546
https://access.redhat.com/security/cve/CVE-2019-18197
https://access.redhat.com/security/cve/CVE-2019-18808
https://access.redhat.com/security/cve/CVE-2019-18809
https://access.redhat.com/security/cve/CVE-2019-19046
https://access.redhat.com/security/cve/CVE-2019-19056
https://access.redhat.com/security/cve/CVE-2019-19062
https://access.redhat.com/security/cve/CVE-2019-19063
https://access.redhat.com/security/cve/CVE-2019-19068
https://access.redhat.com/security/cve/CVE-2019-19072
https://access.redhat.com/security/cve/CVE-2019-19221
https://access.redhat.com/security/cve/CVE-2019-19319
https://access.redhat.com/security/cve/CVE-2019-19332
https://access.redhat.com/security/cve/CVE-2019-19447
https://access.redhat.com/security/cve/CVE-2019-19524
https://access.redhat.com/security/cve/CVE-2019-19533
https://access.redhat.com/security/cve/CVE-2019-19537
https://access.redhat.com/security/cve/CVE-2019-19543
https://access.redhat.com/security/cve/CVE-2019-19602
https://access.redhat.com/security/cve/CVE-2019-19767
https://access.redhat.com/security/cve/CVE-2019-19770
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20054
https://access.redhat.com/security/cve/CVE-2019-20218
https://access.redhat.com/security/cve/CVE-2019-20386
https://access.redhat.com/security/cve/CVE-2019-20387
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20636
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-20812
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2019-20916
https://access.redhat.com/security/cve/CVE-2020-0305
https://access.redhat.com/security/cve/CVE-2020-0444
https://access.redhat.com/security/cve/CVE-2020-1716
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-1751
https://access.redhat.com/security/cve/CVE-2020-1752
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-2574
https://access.redhat.com/security/cve/CVE-2020-2752
https://access.redhat.com/security/cve/CVE-2020-2922
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3898
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-6405
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8492
https://access.redhat.com/security/cve/CVE-2020-8563
https://access.redhat.com/security/cve/CVE-2020-8566
https://access.redhat.com/security/cve/CVE-2020-8619
https://access.redhat.com/security/cve/CVE-2020-8622
https://access.redhat.com/security/cve/CVE-2020-8623
https://access.redhat.com/security/cve/CVE-2020-8624
https://access.redhat.com/security/cve/CVE-2020-8647
https://access.redhat.com/security/cve/CVE-2020-8648
https://access.redhat.com/security/cve/CVE-2020-8649
https://access.redhat.com/security/cve/CVE-2020-9327
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-10029
https://access.redhat.com/security/cve/CVE-2020-10732
https://access.redhat.com/security/cve/CVE-2020-10749
https://access.redhat.com/security/cve/CVE-2020-10751
https://access.redhat.com/security/cve/CVE-2020-10763
https://access.redhat.com/security/cve/CVE-2020-10773
https://access.redhat.com/security/cve/CVE-2020-10774
https://access.redhat.com/security/cve/CVE-2020-10942
https://access.redhat.com/security/cve/CVE-2020-11565
https://access.redhat.com/security/cve/CVE-2020-11668
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-12465
https://access.redhat.com/security/cve/CVE-2020-12655
https://access.redhat.com/security/cve/CVE-2020-12659
https://access.redhat.com/security/cve/CVE-2020-12770
https://access.redhat.com/security/cve/CVE-2020-12826
https://access.redhat.com/security/cve/CVE-2020-13249
https://access.redhat.com/security/cve/CVE-2020-13630
https://access.redhat.com/security/cve/CVE-2020-13631
https://access.redhat.com/security/cve/CVE-2020-13632
https://access.redhat.com/security/cve/CVE-2020-14019
https://access.redhat.com/security/cve/CVE-2020-14040
https://access.redhat.com/security/cve/CVE-2020-14381
https://access.redhat.com/security/cve/CVE-2020-14382
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-14422
https://access.redhat.com/security/cve/CVE-2020-15157
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-15862
https://access.redhat.com/security/cve/CVE-2020-15999
https://access.redhat.com/security/cve/CVE-2020-16166
https://access.redhat.com/security/cve/CVE-2020-24490
https://access.redhat.com/security/cve/CVE-2020-24659
https://access.redhat.com/security/cve/CVE-2020-25211
https://access.redhat.com/security/cve/CVE-2020-25641
https://access.redhat.com/security/cve/CVE-2020-25658
https://access.redhat.com/security/cve/CVE-2020-25661
https://access.redhat.com/security/cve/CVE-2020-25662
https://access.redhat.com/security/cve/CVE-2020-25681
https://access.redhat.com/security/cve/CVE-2020-25682
https://access.redhat.com/security/cve/CVE-2020-25683
https://access.redhat.com/security/cve/CVE-2020-25684
https://access.redhat.com/security/cve/CVE-2020-25685
https://access.redhat.com/security/cve/CVE-2020-25686
https://access.redhat.com/security/cve/CVE-2020-25687
https://access.redhat.com/security/cve/CVE-2020-25694
https://access.redhat.com/security/cve/CVE-2020-25696
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-27813
https://access.redhat.com/security/cve/CVE-2020-27846
https://access.redhat.com/security/cve/CVE-2020-28362
https://access.redhat.com/security/cve/CVE-2020-29652
https://access.redhat.com/security/cve/CVE-2021-2007
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2019-10-29-3 tvOS 13.2

tvOS 13.2 is now available and addresses the following:

Accounts
Available for: Apple TV 4K and Apple TV HD
Impact: A remote attacker may be able to leak memory
Description: An out-of-bounds read was addressed with improved input validation.
CVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at Technische Universität Darmstadt

App Store
Available for: Apple TV 4K and Apple TV HD
Impact: A local attacker may be able to login to the account of a previously logged in user without valid credentials.
CVE-2019-8803: Kiyeon An, 차민규 (CHA Minkyu)

Audio
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8785: Ian Beer of Google Project Zero
CVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure

AVEVideoEncoder
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure

File System Events
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with system privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8798: ABC Research s.r.o. working with Trend Micro's Zero Day Initiative

Kernel
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to read restricted memory
Description: A validation issue was addressed with improved input sanitization.
CVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure

Kernel
Available for: Apple TV 4K and Apple TV HD
Impact: An application may be able to execute arbitrary code with kernel privileges
Description: A memory corruption issue was addressed with improved memory handling.
CVE-2019-8782: Cheolung Lee of LINE+ Security Team
CVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team
CVE-2019-8808: found by OSS-Fuzz
CVE-2019-8811: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8812: an anonymous researcher
CVE-2019-8814: Cheolung Lee of LINE+ Security Team
CVE-2019-8816: Soyeon Park of SSLab at Georgia Tech
CVE-2019-8819: Cheolung Lee of LINE+ Security Team
CVE-2019-8820: Samuel Groß of Google Project Zero
CVE-2019-8821: Sergei Glazunov of Google Project Zero
CVE-2019-8822: Sergei Glazunov of Google Project Zero
CVE-2019-8823: Sergei Glazunov of Google Project Zero

WebKit Process Model
Available for: Apple TV 4K and Apple TV HD
Impact: Processing maliciously crafted web content may lead to arbitrary code execution
Description: Multiple memory corruption issues were addressed with improved memory handling.
CVE-2019-8815: Apple

Additional recognition

CFNetwork
We would like to acknowledge Lily Chen of Google for their assistance.

Kernel
We would like to acknowledge Jann Horn of Google Project Zero for their assistance.

WebKit
We would like to acknowledge Dlive of Tencent's Xuanwu Lab and Zhiyi Zhang of Codesafe Team of Legendsec at Qi'anxin Group for their assistance.

Installation note:

Apple TV will periodically check for software updates. Alternatively, you may manually check for software updates by selecting "Settings -> System -> Software Update -> Update Software."

To check the current version of software, select "Settings -> General -> About."

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
.
Bug Fix(es):

* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

* The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

* The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

* [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

* The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

* Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

* [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

3. Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

5. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV: Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from `oc explain`
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.

Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
\n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String updates\n1961509 - DHCP daemon 
pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established\n1976301 - [ci] 
e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n2007379 - Events are not generated for master offset  for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope 
info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all the kebab actions. \n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift  clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization  : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All,  data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - noarch\n\n3. Description:\n\nWebKitGTK+ is port of the WebKit portable web rendering engine to the GTK+\nplatform. These packages provide WebKitGTK+ for GTK+ 3. \n\nThe following packages have been upgraded to a later upstream version:\nwebkitgtk4 (2.28.2). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nppc64:\nwebkitgtk4-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm\n\nppc64le:\nwebkitgtk4-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm\n\ns390x:\nwebkitgtk4-2.28.2-2.el7.s390.rpm\nwebkitgtk4-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390x.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nppc64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm\n\ns390x:\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1207179 - Select items matching non existing pattern does not unselect already selected\n1566027 - can\u0027t correctly compute contents size if hidden files are included\n1569868 - Browsing samba shares using gvfs is very slow\n1652178 - [RFE] perf-tool run on wayland\n1656262 - The terminal\u0027s character display is unclear on rhel8 guest after installing gnome\n1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled\n1692536 - login screen shows after gnome-initial-setup\n1706008 - Sound Effect sometimes fails to change to selected option. \n1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead. \n1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined\n1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly\n1758891 - tracker-devel subpackage missing from el8 repos\n1775345 - Rebase xdg-desktop-portal to 1.6\n1778579 - Nautilus does not respect umask settings. 
\n1779691 - Rebase xdg-desktop-portal-gtk to 1.6\n1794045 - There are two different high contrast versions of desktop icons\n1804719 - Update vte291 to 0.52.4\n1805929 - RHEL 8.1 gnome-shell-extension errors\n1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp\n1814820 - No checkbox to install updates in the shutdown dialog\n1816070 - \"search for an application to open this file\" dialog broken\n1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution\n1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1817143 - Rebase WebKitGTK to 2.28\n1820759 - Include IO stall fixes\n1820760 - Include IO fixes\n1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening\n1827030 - gnome-settings-daemon: subscription notification on CentOS Stream\n1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content\n1832347 - [Rebase] Rebase pipewire to 0.3.x\n1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install\n1837381 - Backport screen cast improvements to 8.3\n1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version\n1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6\n1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113\n1840080 - Can not control top bar menus via keys in Wayland\n1840788 - [flatpak][rhel8] unable to build potrace as dependency\n1843486 - Software crash after clicking Updates tab\n1844578 - anaconda very rarely crashes at startup with a pygobject traceback\n1846191 - usb adapters hotplug crashes gnome-shell\n1847051 - JS ERROR: TypeError: area is null\n1847061 - File search doesn\u0027t work under certain locales\n1847062 - gnome-remote-desktop crash on QXL graphics\n1847203 - gnome-shell: 
get_top_visible_window_actor(): gnome-shell killed by SIGSEGV\n1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow\n1854734 - PipeWire 0.2 should be required by xdg-desktop-portal\n1866332 - Remove obsolete libusb-devel dependency\n1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at \"Started GNOME Display Manager\" - GDM regression issue",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8819"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 2.52
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-8819",
        "trust": 3.4
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96749516",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "155069",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4012",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4233",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4456",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155216",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-14245",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-160254",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8819",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159375",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "id": "VAR-201912-1848",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160254"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:53:01.637000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "About the security content of iCloud for Windows 11.0",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210727"
      },
      {
        "title": "About the security content of iCloud for Windows 7.15",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210728"
      },
      {
        "title": "About the security content of iOS 13.2 and iPadOS 13.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210721"
      },
      {
        "title": "About the security content of Xcode 11.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210729"
      },
      {
        "title": "About the security content of macOS Catalina 10.15.1, Security Update 2019-001, and Security Update 2019-006",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210722"
      },
      {
        "title": "About the security content of tvOS 13.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210723"
      },
      {
        "title": "About the security content of watchOS 6.1",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210724"
      },
      {
        "title": "About the security content of Safari 13.0.3",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210725"
      },
      {
        "title": "About the security content of iTunes 12.10.2 for Windows",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210726"
      },
      {
        "title": "Mac \u306b\u642d\u8f09\u3055\u308c\u3066\u3044\u308b macOS \u3092\u8abf\u3079\u308b",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT201260"
      },
      {
        "title": "Multiple Apple product WebKit Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=105603"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210721"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210723"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210725"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210726"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210727"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210728"
      },
      {
        "trust": 1.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 1.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 1.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8786"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8787"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8794"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8798"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8797"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8803"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8795"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8785"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8788"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8803"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8815"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8766"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8735"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8789"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8804"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8816"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8775"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8793"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8805"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8710"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8819"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8782"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8794"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8807"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8743"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8820"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8783"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8795"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8811"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8747"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8821"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8784"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8797"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8812"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8750"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8822"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8785"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8798"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8813"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8764"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8823"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8786"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8802"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8814"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8765"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8787"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96749516/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8750"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8802"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8788"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8804"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8775"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8789"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8805"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8793"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8807"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8747"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8784"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht201222"
      },
      {
        "trust": 0.6,
        "url": "https://wpewebkit.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://webkitgtk.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20193044-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210725"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210728"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155069/apple-security-advisory-2019-10-29-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159816/red-hat-security-advisory-2020-4451-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155216/webkitgtk-wpe-webkit-code-execution-xss.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4456/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156742/gentoo-linux-security-advisory-202003-22.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4012/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4233/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-30746"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.3,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/fulldisclosure/2019/oct/50"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8688"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8678"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8669"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8644"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8680"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11793"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/site/solutions/537113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15503"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "date": "2022-08-09T14:36:05",
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2019-11-01T17:11:43",
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-09-30T15:47:21",
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "date": "2020-11-04T15:24:00",
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "date": "2019-10-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      },
      {
        "date": "2019-11-01T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "date": "2019-12-18T18:15:44.413000",
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160254"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8819"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      },
      {
        "date": "2020-01-07T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "date": "2024-11-21T04:50:32.127000",
        "db": "NVD",
        "id": "CVE-2019-8819"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Vulnerabilities in multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1757"
      }
    ],
    "trust": 0.6
  }
}
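The `sources` blocks in the record above are plain JSON fragments with a repeating `"db"`/`"id"` shape, so they can be summarized without a JSON parser. A minimal sketch (shell, using only `grep`, `sort`, and `uniq`; the `record` sample below is a trimmed excerpt of the record above, and in practice you would point `grep` at a saved copy of the full record):

```shell
# Tally reference entries per source database (VULHUB, PACKETSTORM, NVD, ...)
# in a VARIOT-style record. The sample is a trimmed excerpt of the
# "sources" data shown above.
record='
{ "db": "VULHUB", "id": "VHN-160254" },
{ "db": "PACKETSTORM", "id": "168011" },
{ "db": "PACKETSTORM", "id": "161546" },
{ "db": "NVD", "id": "CVE-2019-8819" }
'
# Extract each "db" value, then count and rank the occurrences.
printf '%s\n' "$record" | grep -o '"db": "[A-Z]*"' | sort | uniq -c | sort -rn
```

For the full record above, the same pipeline would rank PACKETSTORM first, since it contributes several advisory mirrors while the other databases contribute one entry each.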

VAR-202210-1530

Vulnerability from variot - Updated: 2025-12-22 22:51

A logic issue was addressed with improved state management. This issue is fixed in tvOS 16.1, macOS Ventura 13, watchOS 9.1, Safari 16.1, iOS 16.1 and iPadOS 16. Processing maliciously crafted web content may disclose sensitive user information. Both Apple macOS Big Sur and Apple macOS Monterey are products of Apple Inc. in the United States. Apple macOS Big Sur is the 17th major release of Apple's macOS operating system for the Mac. Apple macOS Monterey is the 18th major release of macOS, the desktop operating system for the Macintosh. Both macOS Big Sur and macOS Monterey are affected by this issue.

Safari 16.1 may be obtained from the Mac App Store.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256


Debian Security Advisory DSA-5273-1                   security@debian.org
https://www.debian.org/security/                           Alberto Garcia
November 08, 2022                     https://www.debian.org/security/faq


Package        : webkit2gtk
CVE ID         : CVE-2022-42799 CVE-2022-42823 CVE-2022-42824

The following vulnerabilities have been discovered in the WebKitGTK web engine:

CVE-2022-42799

Jihwan Kim and Dohyun Lee discovered that visiting a malicious
website may lead to user interface spoofing.

For the stable distribution (bullseye), these problems have been fixed in version 2.38.2-1~deb11u1.

We recommend that you upgrade your webkit2gtk packages.

Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641. To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About". Alternatively, on your watch, select "Settings > General > About".

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2022-10-24-1 iOS 16.1 and iPadOS 16

iOS 16.1 and iPadOS 16 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213489.

AppleMobileFileIntegrity Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may be able to modify protected parts of the file system Description: This issue was addressed by removing additional entitlements. CVE-2022-42825: Mickey Jin (@patch1t)

AVEVideoEncoder Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2022-32940: ABC Research s.r.o.

CFNetwork Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: Processing a maliciously crafted certificate may lead to arbitrary code execution Description: A certificate validation issue existed in the handling of WKWebView. CVE-2022-42813: Jonathan Zhang of Open Computing Facility (ocf.berkeley.edu)

Core Bluetooth Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may be able to record audio using a pair of connected AirPods Description: This issue was addressed with improved entitlements. CVE-2022-32946: Guilherme Rambo of Best Buddy Apps (rambo.codes)

GPU Drivers Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32947: Asahi Lina (@LinaAsahi)

IOHIDFamily Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may cause unexpected app termination or arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2022-42820: Peter Pan ZhenPeng of STAR Labs

IOKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2022-42806: Tingting Yin of Tsinghua University

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-32924: Ian Beer of Google Project Zero

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: A remote user may be able to cause kernel code execution Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-42808: Zweig of Kunlun Lab

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited. CVE-2022-42827: an anonymous researcher

ppp Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-42829: an anonymous researcher

ppp Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42830: an anonymous researcher

ppp Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2022-42831: an anonymous researcher CVE-2022-42832: an anonymous researcher

Sandbox Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: An app may be able to access user-sensitive data Description: An access issue was addressed with additional sandbox restrictions. CVE-2022-42811: Justin Bui (@slyd0g) of Snowflake

Shortcuts Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: A shortcut may be able to check the existence of an arbitrary path on the file system Description: A parsing issue in the handling of directory paths was addressed with improved path validation. CVE-2022-32938: Cristian Dinca of Tudor Vianu National High School of Computer Science of Romania

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: Visiting a malicious website may lead to user interface spoofing Description: The issue was addressed with improved UI handling. WebKit Bugzilla: 243693 CVE-2022-42799: Jihwan Kim (@gPayl0ad), Dohyun Lee (@l33d0hyun)

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved memory handling. WebKit Bugzilla: 244622 CVE-2022-42823: Dohyun Lee (@l33d0hyun) of SSD Labs

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 245058 CVE-2022-42824: Abdulrahman Alqabandi of Microsoft Browser Vulnerability Research, Ryan Shin of IAAI SecLab at Korea University, Dohyun Lee (@l33d0hyun) of DNSLab at Korea University

WebKit PDF Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, iPad mini 5th generation and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 242781 CVE-2022-32922: Yonghwi Jin (@jinmo123) at Theori working with Trend Micro Zero Day Initiative

Additional recognition

iCloud We would like to acknowledge Tim Michaud (@TimGMichaud) of Moveworks.ai for their assistance.

Kernel We would like to acknowledge Peter Nguyen of STAR Labs, Tim Michaud (@TimGMichaud) of Moveworks.ai, Tommy Muir (@Muirey03) for their assistance.

WebKit We would like to acknowledge Maddie Stone of Google Project Zero, Narendra Bhati (@imnarendrabhati) of Suma Soft Pvt. Ltd., an anonymous researcher for their assistance.

All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIyBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmNW0WIACgkQ4RjMIDke NxmuNw/4m3JXuBK+obHVvyb4tGoeHKNZtJi/tHr0gDMtDjr5pIlXdl2wX99eLzoG D2Dj4YtMnUhqEgQVKVcnzxQuhmdHK21TmqgWi+kHNyg0plKX0mj+1222/qjtZOdf FgCHKsR0LVLDpgjthvA9WYqwbfOMmXvSS4sEHaeSIdo+8R68GcV9yJQ98hWsxqeh YPzZ8RqtkuzeeYVD8jaxVW6l7lQ37puQ3romivRe46Wi36nkYG6wifggWMSKmeNZ 9CVs/3GT294l9GnjuIHaM2WfnHzYSEQY/eqP34SQ96UPClpJF2afBCRd3eOl8ov1 hgyhjtfJCqqfb9uzXj0ciFrLFdn8xLxsY7L+RSOwtLz0zSTfwAkAEDnL7i5EBkwn 7a2l/r6bb/W7IOC67fQWZi33SkpGPJF51oT3PLOh1RyeRFE+NYd4hMMAIo8Bg4eZ 45aAh2L7ak1T6V4PnUuG+o51oQKKRH1b/MTamVyFWffT2uX8w+hrdDVifd/K/jmD auFkibGQBmO/VWe6f5lKsDQeq5RIax6OBs8LkZQ3EMIHi9De4s5WIlPakm4qYCLW QXQKlEi8p3BI4d5kckcXjdtwRp8QiJLinq9rZFzq5U5nQ2Z4KucHrMO0h5Frqisa KsmkMjSKuPPT5GTap9Z5BVJVSOADx0hTExUE1cGBESCtnmaXrw== =3Dgs -----END PGP SIGNATURE-----

==========================================================================
Ubuntu Security Notice USN-5730-1
November 17, 2022

webkit2gtk vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 22.10
  • Ubuntu 22.04 LTS
  • Ubuntu 20.04 LTS

Summary:

Several security issues were fixed in WebKitGTK. If a user were tricked into viewing a malicious website, a remote attacker could exploit a variety of issues related to web browser security, including cross-site scripting attacks, denial of service attacks, and arbitrary code execution.

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 22.10:
  libjavascriptcoregtk-4.0-18     2.38.2-0ubuntu0.22.10.1
  libjavascriptcoregtk-4.1-0      2.38.2-0ubuntu0.22.10.1
  libjavascriptcoregtk-5.0-0      2.38.2-0ubuntu0.22.10.1
  libwebkit2gtk-4.0-37            2.38.2-0ubuntu0.22.10.1
  libwebkit2gtk-4.1-0             2.38.2-0ubuntu0.22.10.1
  libwebkit2gtk-5.0-0             2.38.2-0ubuntu0.22.10.1

Ubuntu 22.04 LTS:
  libjavascriptcoregtk-4.0-18     2.38.2-0ubuntu0.22.04.2
  libjavascriptcoregtk-4.1-0      2.38.2-0ubuntu0.22.04.2
  libwebkit2gtk-4.0-37            2.38.2-0ubuntu0.22.04.2
  libwebkit2gtk-4.1-0             2.38.2-0ubuntu0.22.04.2

Ubuntu 20.04 LTS:
  libjavascriptcoregtk-4.0-18     2.38.2-0ubuntu0.20.04.1
  libwebkit2gtk-4.0-37            2.38.2-0ubuntu0.20.04.1
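After upgrading, the installed package version can be checked against the fixed version listed for your release. A minimal sketch (shell; the version strings and the `dpkg-query` invocation in the comment come from this notice, while the `version_ge` helper is hypothetical and uses `sort -V`, which approximates Debian version ordering but ignores epoch and `~` subtleties — `dpkg --compare-versions` is the authoritative tool):

```shell
# Return success (0) if version $1 is >= version $2, using version sort.
# A sketch only; dpkg --compare-versions handles all Debian version rules.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

fixed="2.38.2-0ubuntu0.22.04.2"       # fixed version for Ubuntu 22.04 LTS
installed="2.38.2-0ubuntu0.22.04.2"   # e.g. installed=$(dpkg-query -W -f='${Version}' libwebkit2gtk-4.0-37)

if version_ge "$installed" "$fixed"; then
  echo "libwebkit2gtk-4.0-37 is patched"
else
  echo "libwebkit2gtk-4.0-37 still needs the update"
fi
```

With the placeholder `installed` value above, this prints "libwebkit2gtk-4.0-37 is patched"; substituting the real `dpkg-query` output checks the running system.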

This update uses a new upstream release, which includes additional bug fixes. After a standard system update you need to restart any applications that use WebKitGTK, such as Epiphany, to make all the necessary changes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202305-32


                                       https://security.gentoo.org/

Severity: High
Title: WebKitGTK+: Multiple Vulnerabilities
Date: May 30, 2023
Bugs: #871732, #879571, #888563, #905346, #905349, #905351
ID: 202305-32


Synopsis

Multiple vulnerabilities have been found in WebkitGTK+, the worst of which could result in arbitrary code execution.

Affected packages

Package              Vulnerable    Unaffected
-------------------  ------------  ------------
net-libs/webkit-gtk  < 2.40.1      >= 2.40.1

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebKitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.40.1"

References

[ 1 ] CVE-2022-32885   https://nvd.nist.gov/vuln/detail/CVE-2022-32885
[ 2 ] CVE-2022-32886   https://nvd.nist.gov/vuln/detail/CVE-2022-32886
[ 3 ] CVE-2022-32888   https://nvd.nist.gov/vuln/detail/CVE-2022-32888
[ 4 ] CVE-2022-32891   https://nvd.nist.gov/vuln/detail/CVE-2022-32891
[ 5 ] CVE-2022-32923   https://nvd.nist.gov/vuln/detail/CVE-2022-32923
[ 6 ] CVE-2022-42799   https://nvd.nist.gov/vuln/detail/CVE-2022-42799
[ 7 ] CVE-2022-42823   https://nvd.nist.gov/vuln/detail/CVE-2022-42823
[ 8 ] CVE-2022-42824   https://nvd.nist.gov/vuln/detail/CVE-2022-42824
[ 9 ] CVE-2022-42826   https://nvd.nist.gov/vuln/detail/CVE-2022-42826
[ 10 ] CVE-2022-42852  https://nvd.nist.gov/vuln/detail/CVE-2022-42852
[ 11 ] CVE-2022-42856  https://nvd.nist.gov/vuln/detail/CVE-2022-42856
[ 12 ] CVE-2022-42863  https://nvd.nist.gov/vuln/detail/CVE-2022-42863
[ 13 ] CVE-2022-42867  https://nvd.nist.gov/vuln/detail/CVE-2022-42867
[ 14 ] CVE-2022-46691  https://nvd.nist.gov/vuln/detail/CVE-2022-46691
[ 15 ] CVE-2022-46692  https://nvd.nist.gov/vuln/detail/CVE-2022-46692
[ 16 ] CVE-2022-46698  https://nvd.nist.gov/vuln/detail/CVE-2022-46698
[ 17 ] CVE-2022-46699  https://nvd.nist.gov/vuln/detail/CVE-2022-46699
[ 18 ] CVE-2022-46700  https://nvd.nist.gov/vuln/detail/CVE-2022-46700
[ 19 ] CVE-2023-23517  https://nvd.nist.gov/vuln/detail/CVE-2023-23517
[ 20 ] CVE-2023-23518  https://nvd.nist.gov/vuln/detail/CVE-2023-23518
[ 21 ] CVE-2023-23529  https://nvd.nist.gov/vuln/detail/CVE-2023-23529
[ 22 ] CVE-2023-25358  https://nvd.nist.gov/vuln/detail/CVE-2023-25358
[ 23 ] CVE-2023-25360  https://nvd.nist.gov/vuln/detail/CVE-2023-25360
[ 24 ] CVE-2023-25361  https://nvd.nist.gov/vuln/detail/CVE-2023-25361
[ 25 ] CVE-2023-25362  https://nvd.nist.gov/vuln/detail/CVE-2023-25362
[ 26 ] CVE-2023-25363  https://nvd.nist.gov/vuln/detail/CVE-2023-25363
[ 27 ] CVE-2023-27932  https://nvd.nist.gov/vuln/detail/CVE-2023-27932
[ 28 ] CVE-2023-27954  https://nvd.nist.gov/vuln/detail/CVE-2023-27954
[ 29 ] CVE-2023-28205  https://nvd.nist.gov/vuln/detail/CVE-2023-28205
[ 30 ] WSA-2022-0009   https://webkitgtk.org/security/WSA-2022-0009.html
[ 31 ] WSA-2022-0010   https://webkitgtk.org/security/WSA-2022-0010.html
[ 32 ] WSA-2023-0001   https://webkitgtk.org/security/WSA-2023-0001.html
[ 33 ] WSA-2023-0002   https://webkitgtk.org/security/WSA-2023-0002.html
[ 34 ] WSA-2023-0003   https://webkitgtk.org/security/WSA-2023-0003.html

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202305-32

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2023 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Important: webkit2gtk3 security and bug fix update
Advisory ID:       RHSA-2023:2256-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:2256
Issue date:        2023-05-09
CVE Names:         CVE-2022-32886 CVE-2022-32888 CVE-2022-32923
                   CVE-2022-42799 CVE-2022-42823 CVE-2022-42824
                   CVE-2022-42826 CVE-2022-42852 CVE-2022-42863
                   CVE-2022-42867 CVE-2022-46691 CVE-2022-46692
                   CVE-2022-46698 CVE-2022-46699 CVE-2022-46700
                   CVE-2023-23517 CVE-2023-23518 CVE-2023-25358
                   CVE-2023-25360 CVE-2023-25361 CVE-2023-25362
                   CVE-2023-25363
====================================================================

1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 9.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
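For reference, a CVSS v3.1 base score can be recomputed from its vector string using the standard equations (metric weights taken from the CVSS v3.1 specification). The sketch below handles scope-unchanged vectors only, which covers the vectors recorded in this entry; the function names are illustrative, not part of any Red Hat or NVD tooling.

```python
import math

# CVSS v3.1 metric weights (scope unchanged only; PR weights differ
# when scope is changed, which this sketch does not handle).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x: float) -> float:
    # CVSS v3.1 "Roundup": smallest number with one decimal >= x.
    return math.ceil(x * 10) / 10

def base_score(vector: str) -> float:
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    assert metrics["S"] == "U", "only scope-unchanged vectors handled"
    iss = 1 - (1 - CIA[metrics["C"]]) * (1 - CIA[metrics["I"]]) * (1 - CIA[metrics["A"]])
    impact = 6.42 * iss
    exploitability = (8.22 * AV[metrics["AV"]] * AC[metrics["AC"]]
                      * PR[metrics["PR"]] * UI[metrics["UI"]])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# The vector published for CVE-2022-42824:
print(base_score("CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N"))  # -> 5.5
```

The 5.5 result matches the MEDIUM base score recorded for CVE-2022-42824 in the data below.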

2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64

3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.2 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Package List:

Red Hat Enterprise Linux AppStream (v. 9):

Source: webkit2gtk3-2.38.5-1.el9.src.rpm

aarch64: webkit2gtk3-2.38.5-1.el9.aarch64.rpm webkit2gtk3-debuginfo-2.38.5-1.el9.aarch64.rpm webkit2gtk3-debugsource-2.38.5-1.el9.aarch64.rpm webkit2gtk3-devel-2.38.5-1.el9.aarch64.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el9.aarch64.rpm webkit2gtk3-jsc-2.38.5-1.el9.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.aarch64.rpm webkit2gtk3-jsc-devel-2.38.5-1.el9.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.aarch64.rpm

ppc64le: webkit2gtk3-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-debuginfo-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-debugsource-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-devel-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-jsc-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-jsc-devel-2.38.5-1.el9.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm

s390x: webkit2gtk3-2.38.5-1.el9.s390x.rpm webkit2gtk3-debuginfo-2.38.5-1.el9.s390x.rpm webkit2gtk3-debugsource-2.38.5-1.el9.s390x.rpm webkit2gtk3-devel-2.38.5-1.el9.s390x.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el9.s390x.rpm webkit2gtk3-jsc-2.38.5-1.el9.s390x.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.s390x.rpm webkit2gtk3-jsc-devel-2.38.5-1.el9.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.s390x.rpm

x86_64: webkit2gtk3-2.38.5-1.el9.i686.rpm webkit2gtk3-2.38.5-1.el9.x86_64.rpm webkit2gtk3-debuginfo-2.38.5-1.el9.i686.rpm webkit2gtk3-debuginfo-2.38.5-1.el9.x86_64.rpm webkit2gtk3-debugsource-2.38.5-1.el9.i686.rpm webkit2gtk3-debugsource-2.38.5-1.el9.x86_64.rpm webkit2gtk3-devel-2.38.5-1.el9.i686.rpm webkit2gtk3-devel-2.38.5-1.el9.x86_64.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el9.i686.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el9.x86_64.rpm webkit2gtk3-jsc-2.38.5-1.el9.i686.rpm webkit2gtk3-jsc-2.38.5-1.el9.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.i686.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el9.x86_64.rpm webkit2gtk3-jsc-devel-2.38.5-1.el9.i686.rpm webkit2gtk3-jsc-devel-2.38.5-1.el9.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.i686.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.x86_64.rpm
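As a rough post-update check, the installed webkit2gtk3 version-release can be compared against the fixed build (2.38.5-1.el9). The sketch below is a simplified approximation of librpm's rpmvercmp (epoch, tilde, and caret handling are omitted); the helper names are illustrative, not part of any RPM tooling.

```python
import re

# Simplified rpmvercmp: versions split into alternating numeric and
# alphabetic segments; numeric segments compare as integers, and a
# numeric segment sorts newer than an alphabetic one.
_SEG = re.compile(r"\d+|[A-Za-z]+")

def rpmvercmp(a: str, b: str) -> int:
    sa, sb = _SEG.findall(a), _SEG.findall(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            if int(x) != int(y):
                return -1 if int(x) < int(y) else 1
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1   # numeric beats alphabetic
        elif x != y:
            return -1 if x < y else 1
    if len(sa) != len(sb):
        return -1 if len(sa) < len(sb) else 1
    return 0

def is_fixed(installed: str, fixed: str) -> bool:
    # Compare "version-release" pairs, e.g. "2.38.5-1.el9".
    iv, ir = installed.rsplit("-", 1)
    fv, fr = fixed.rsplit("-", 1)
    cmp_v = rpmvercmp(iv, fv)
    return cmp_v > 0 or (cmp_v == 0 and rpmvercmp(ir, fr) >= 0)

print(is_fixed("2.36.7-1.el9", "2.38.5-1.el9"))  # -> False (still vulnerable)
print(is_fixed("2.38.5-1.el9", "2.38.5-1.el9"))  # -> True
```

The version-release string to feed in would come from the package manager (for example, the output of a query such as rpm -q on the installed package).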

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

6. References:

https://access.redhat.com/security/cve/CVE-2022-32886 https://access.redhat.com/security/cve/CVE-2022-32888 https://access.redhat.com/security/cve/CVE-2022-32923 https://access.redhat.com/security/cve/CVE-2022-42799 https://access.redhat.com/security/cve/CVE-2022-42823 https://access.redhat.com/security/cve/CVE-2022-42824 https://access.redhat.com/security/cve/CVE-2022-42826 https://access.redhat.com/security/cve/CVE-2022-42852 https://access.redhat.com/security/cve/CVE-2022-42863 https://access.redhat.com/security/cve/CVE-2022-42867 https://access.redhat.com/security/cve/CVE-2022-46691 https://access.redhat.com/security/cve/CVE-2022-46692 https://access.redhat.com/security/cve/CVE-2022-46698 https://access.redhat.com/security/cve/CVE-2022-46699 https://access.redhat.com/security/cve/CVE-2022-46700 https://access.redhat.com/security/cve/CVE-2023-23517 https://access.redhat.com/security/cve/CVE-2023-23518 https://access.redhat.com/security/cve/CVE-2023-25358 https://access.redhat.com/security/cve/CVE-2023-25360 https://access.redhat.com/security/cve/CVE-2023-25361 https://access.redhat.com/security/cve/CVE-2023-25362 https://access.redhat.com/security/cve/CVE-2023-25363 https://access.redhat.com/security/updates/classification/#important https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc.


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202210-1530",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "37"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "9.1"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.1"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.0"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.1"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.1"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "169607"
      },
      {
        "db": "PACKETSTORM",
        "id": "169554"
      },
      {
        "db": "PACKETSTORM",
        "id": "169555"
      },
      {
        "db": "PACKETSTORM",
        "id": "169550"
      }
    ],
    "trust": 0.4
  },
  "cve": "CVE-2022-42824",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "id": "CVE-2022-42824",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-42824",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
            "id": "CVE-2022-42824",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202210-1674",
            "trust": 0.6,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A logic issue was addressed with improved state management. This issue is fixed in tvOS 16.1, macOS Ventura 13, watchOS 9.1, Safari 16.1, iOS 16.1 and iPadOS 16. Processing maliciously crafted web content may disclose sensitive user information. Both Apple macOS Big Sur and Apple macOS Monterey are products of Apple Inc. in the United States. Apple macOS Big Sur is the 17th major release of Apple\u0027s operating system macOS for the MAC. Apple macOS Monterey is the 18th major release of macOS, the desktop operating system for the Macintosh. Apple macOS Big Sur and macOS Monterey have security flaws. \n\nSafari 16.1 may be obtained from the Mac App Store. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5273-1                   security@debian.org\nhttps://www.debian.org/security/                           Alberto Garcia\nNovember 08, 2022                     https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : webkit2gtk\nCVE ID         : CVE-2022-42799 CVE-2022-42823 CVE-2022-42824\n\nThe following vulnerabilities have been discovered in the WebKitGTK\nweb engine:\n\nCVE-2022-42799\n\n    Jihwan Kim and Dohyun Lee discovered that visiting a malicious\n    website may lead to user interface spoofing. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.38.2-1~deb11u1. \n\nWe recommend that you upgrade your webkit2gtk packages. \n\nInstructions on how to update your Apple Watch software are available\nat https://support.apple.com/kb/HT204641  To check the version on\nyour Apple Watch, open the Apple Watch app on your iPhone and select\n\"My Watch \u003e General \u003e About\".  Alternatively, on your watch, select\n\"My Watch \u003e General \u003e About\". 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-10-24-1 iOS 16.1 and iPadOS 16\n\niOS 16.1 and iPadOS 16 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213489. \n\nAppleMobileFileIntegrity\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may be able to modify protected parts of the file\nsystem\nDescription: This issue was addressed by removing additional\nentitlements. \nCVE-2022-42825: Mickey Jin (@patch1t)\n\nAVEVideoEncoder\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2022-32940: ABC Research s.r.o. \n\nCFNetwork\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: Processing a maliciously crafted certificate may lead to\narbitrary code execution\nDescription: A certificate validation issue existed in the handling\nof WKWebView. \nCVE-2022-42813: Jonathan Zhang of Open Computing Facility\n(ocf.berkeley.edu)\n\nCore Bluetooth\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may be able to record audio using a pair of connected\nAirPods\nDescription: This issue was addressed with improved entitlements. 
\nCVE-2022-32946: Guilherme Rambo of Best Buddy Apps (rambo.codes)\n\nGPU Drivers\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32947: Asahi Lina (@LinaAsahi)\n\nIOHIDFamily\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may cause unexpected app termination or arbitrary code\nexecution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2022-42820: Peter Pan ZhenPeng of STAR Labs\n\nIOKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved locking. \nCVE-2022-42806: Tingting Yin of Tsinghua University\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-32924: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: A remote user may be able to cause kernel code execution\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. 
\nCVE-2022-42808: Zweig of Kunlun Lab\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges. Apple is aware of a report that this issue may\nhave been actively exploited. \nCVE-2022-42827: an anonymous researcher\n\nppp\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-42829: an anonymous researcher\n\nppp\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42830: an anonymous researcher\n\nppp\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: A race condition was addressed with improved locking. \nCVE-2022-42831: an anonymous researcher\nCVE-2022-42832: an anonymous researcher\n\nSandbox\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: An app may be able to access user-sensitive data\nDescription: An access issue was addressed with additional sandbox\nrestrictions. 
\nCVE-2022-42811: Justin Bui (@slyd0g) of Snowflake\n\nShortcuts\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: A shortcut may be able to check the existence of an arbitrary\npath on the file system\nDescription: A parsing issue in the handling of directory paths was\naddressed with improved path validation. \nCVE-2022-32938: Cristian Dinca of Tudor Vianu National High School of\nComputer Science of. Romania\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: Visiting a malicious website may lead to user interface\nspoofing\nDescription: The issue was addressed with improved UI handling. \nWebKit Bugzilla: 243693\nCVE-2022-42799: Jihwan Kim (@gPayl0ad), Dohyun Lee (@l33d0hyun)\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 244622\nCVE-2022-42823: Dohyun Lee (@l33d0hyun) of SSD Labs\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nWebKit Bugzilla: 245058\nCVE-2022-42824: Abdulrahman Alqabandi of Microsoft Browser\nVulnerability Research, Ryan Shin of IAAI SecLab at Korea University,\nDohyun Lee (@l33d0hyun) of DNSLab at Korea University\n\nWebKit PDF\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, iPad mini\n5th generation and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 242781\nCVE-2022-32922: Yonghwi Jin (@jinmo123) at Theori working with Trend\nMicro Zero Day Initiative\n\nAdditional recognition\n\niCloud\nWe would like to acknowledge Tim Michaud (@TimGMichaud) of\nMoveworks.ai for their assistance. \n\nKernel\nWe would like to acknowledge Peter Nguyen of STAR Labs, Tim Michaud\n(@TimGMichaud) of Moveworks.ai, Tommy Muir (@Muirey03) for their\nassistance. \n\nWebKit\nWe would like to acknowledge Maddie Stone of Google Project Zero,\nNarendra Bhati (@imnarendrabhati) of Suma Soft Pvt. Ltd., an\nanonymous researcher for their assistance. \n\n\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIyBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmNW0WIACgkQ4RjMIDke\nNxmuNw/4m3JXuBK+obHVvyb4tGoeHKNZtJi/tHr0gDMtDjr5pIlXdl2wX99eLzoG\nD2Dj4YtMnUhqEgQVKVcnzxQuhmdHK21TmqgWi+kHNyg0plKX0mj+1222/qjtZOdf\nFgCHKsR0LVLDpgjthvA9WYqwbfOMmXvSS4sEHaeSIdo+8R68GcV9yJQ98hWsxqeh\nYPzZ8RqtkuzeeYVD8jaxVW6l7lQ37puQ3romivRe46Wi36nkYG6wifggWMSKmeNZ\n9CVs/3GT294l9GnjuIHaM2WfnHzYSEQY/eqP34SQ96UPClpJF2afBCRd3eOl8ov1\nhgyhjtfJCqqfb9uzXj0ciFrLFdn8xLxsY7L+RSOwtLz0zSTfwAkAEDnL7i5EBkwn\n7a2l/r6bb/W7IOC67fQWZi33SkpGPJF51oT3PLOh1RyeRFE+NYd4hMMAIo8Bg4eZ\n45aAh2L7ak1T6V4PnUuG+o51oQKKRH1b/MTamVyFWffT2uX8w+hrdDVifd/K/jmD\nauFkibGQBmO/VWe6f5lKsDQeq5RIax6OBs8LkZQ3EMIHi9De4s5WIlPakm4qYCLW\nQXQKlEi8p3BI4d5kckcXjdtwRp8QiJLinq9rZFzq5U5nQ2Z4KucHrMO0h5Frqisa\nKsmkMjSKuPPT5GTap9Z5BVJVSOADx0hTExUE1cGBESCtnmaXrw==\n=3Dgs\n-----END PGP SIGNATURE-----\n\n\n. ==========================================================================\nUbuntu Security Notice USN-5730-1\nNovember 17, 2022\n\nwebkit2gtk vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.10\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in WebKitGTK. If a user were tricked into viewing a malicious website, a remote\nattacker could exploit a variety of issues related to web browser security,\nincluding cross-site scripting attacks, denial of service attacks, and\narbitrary code execution. 
\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.10:\n   libjavascriptcoregtk-4.0-18     2.38.2-0ubuntu0.22.10.1\n   libjavascriptcoregtk-4.1-0      2.38.2-0ubuntu0.22.10.1\n   libjavascriptcoregtk-5.0-0      2.38.2-0ubuntu0.22.10.1\n   libwebkit2gtk-4.0-37            2.38.2-0ubuntu0.22.10.1\n   libwebkit2gtk-4.1-0             2.38.2-0ubuntu0.22.10.1\n   libwebkit2gtk-5.0-0             2.38.2-0ubuntu0.22.10.1\n\nUbuntu 22.04 LTS:\n   libjavascriptcoregtk-4.0-18     2.38.2-0ubuntu0.22.04.2\n   libjavascriptcoregtk-4.1-0      2.38.2-0ubuntu0.22.04.2\n   libwebkit2gtk-4.0-37            2.38.2-0ubuntu0.22.04.2\n   libwebkit2gtk-4.1-0             2.38.2-0ubuntu0.22.04.2\n\nUbuntu 20.04 LTS:\n   libjavascriptcoregtk-4.0-18     2.38.2-0ubuntu0.20.04.1\n   libwebkit2gtk-4.0-37            2.38.2-0ubuntu0.20.04.1\n\nThis update uses a new upstream release, which includes additional bug\nfixes. After a standard system update you need to restart any applications\nthat use WebKitGTK, such as Epiphany, to make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202305-32\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebKitGTK+: Multiple Vulnerabilities\n     Date: May 30, 2023\n     Bugs: #871732, #879571, #888563, #905346, #905349, #905351\n       ID: 202305-32\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in arbitrary code execution. 
\n\nAffected packages\n================\nPackage              Vulnerable    Unaffected\n-------------------  ------------  ------------\nnet-libs/webkit-gtk  \u003c 2.40.1      \u003e= 2.40.1\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll WebKitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.40.1\"\n\nReferences\n=========\n[ 1 ] CVE-2022-32885\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32885\n[ 2 ] CVE-2022-32886\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32886\n[ 3 ] CVE-2022-32888\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32888\n[ 4 ] CVE-2022-32891\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32891\n[ 5 ] CVE-2022-32923\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32923\n[ 6 ] CVE-2022-42799\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42799\n[ 7 ] CVE-2022-42823\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42823\n[ 8 ] CVE-2022-42824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42824\n[ 9 ] CVE-2022-42826\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42826\n[ 10 ] CVE-2022-42852\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42852\n[ 11 ] CVE-2022-42856\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42856\n[ 12 ] CVE-2022-42863\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42863\n[ 13 ] CVE-2022-42867\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42867\n[ 14 ] CVE-2022-46691\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46691\n[ 15 ] CVE-2022-46692\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46692\n[ 16 ] CVE-2022-46698\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46698\n[ 17 ] CVE-2022-46699\n      
https://nvd.nist.gov/vuln/detail/CVE-2022-46699\n[ 18 ] CVE-2022-46700\n      https://nvd.nist.gov/vuln/detail/CVE-2022-46700\n[ 19 ] CVE-2023-23517\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23517\n[ 20 ] CVE-2023-23518\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23518\n[ 21 ] CVE-2023-23529\n      https://nvd.nist.gov/vuln/detail/CVE-2023-23529\n[ 22 ] CVE-2023-25358\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25358\n[ 23 ] CVE-2023-25360\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25360\n[ 24 ] CVE-2023-25361\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25361\n[ 25 ] CVE-2023-25362\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25362\n[ 26 ] CVE-2023-25363\n      https://nvd.nist.gov/vuln/detail/CVE-2023-25363\n[ 27 ] CVE-2023-27932\n      https://nvd.nist.gov/vuln/detail/CVE-2023-27932\n[ 28 ] CVE-2023-27954\n      https://nvd.nist.gov/vuln/detail/CVE-2023-27954\n[ 29 ] CVE-2023-28205\n      https://nvd.nist.gov/vuln/detail/CVE-2023-28205\n[ 30 ] WSA-2022-0009\n      https://webkitgtk.org/security/WSA-2022-0009.html\n[ 31 ] WSA-2022-0010\n      https://webkitgtk.org/security/WSA-2022-0010.html\n[ 32 ] WSA-2023-0001\n      https://webkitgtk.org/security/WSA-2023-0001.html\n[ 33 ] WSA-2023-0002\n      https://webkitgtk.org/security/WSA-2023-0002.html\n[ 34 ] WSA-2023-0003\n      https://webkitgtk.org/security/WSA-2023-0003.html\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202305-32\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2023 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). 
\n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: webkit2gtk3 security and bug fix update\nAdvisory ID:       RHSA-2023:2256-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2023:2256\nIssue date:        2023-05-09\nCVE Names:         CVE-2022-32886 CVE-2022-32888 CVE-2022-32923\n                   CVE-2022-42799 CVE-2022-42823 CVE-2022-42824\n                   CVE-2022-42826 CVE-2022-42852 CVE-2022-42863\n                   CVE-2022-42867 CVE-2022-46691 CVE-2022-46692\n                   CVE-2022-46698 CVE-2022-46699 CVE-2022-46700\n                   CVE-2023-23517 CVE-2023-23518 CVE-2023-25358\n                   CVE-2023-25360 CVE-2023-25361 CVE-2023-25362\n                   CVE-2023-25363\n====================================================================\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 9.2 Release Notes linked from the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 9):\n\nSource:\nwebkit2gtk3-2.38.5-1.el9.src.rpm\n\naarch64:\nwebkit2gtk3-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-devel-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el9.i686.rpm\n
webkit2gtk3-devel-debuginfo-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el9.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32886\nhttps://access.redhat.com/security/cve/CVE-2022-32888\nhttps://access.redhat.com/security/cve/CVE-2022-32923\nhttps://access.redhat.com/security/cve/CVE-2022-42799\nhttps://access.redhat.com/security/cve/CVE-2022-42823\nhttps://access.redhat.com/security/cve/CVE-2022-42824\nhttps://access.redhat.com/security/cve/CVE-2022-42826\nhttps://access.redhat.com/security/cve/CVE-2022-42852\nhttps://access.redhat.com/security/cve/CVE-2022-42863\nhttps://access.redhat.com/security/cve/CVE-2022-42867\nhttps://access.redhat.com/security/cve/CVE-2022-46691\nhttps://access.redhat.com/security/cve/CVE-2022-46692\nhttps://access.redhat.com/security/cve/CVE-2022-46698\nhttps://access.redhat.com/security/cve/CVE-2022-46699\nhttps://access.redhat.com/security/cve/CVE-2022-46700\nhttps://access.redhat.com/security/cve/CVE-2023-23517\nhttps://access.redhat.com/security/cve/CVE-2023-23518\nhttps://access.redhat.com/security/cve/CVE-2023-25358\nhttps://access.redhat.com/security/cve/CVE-2023-25360\nhttps://access.redhat.com/security/cve/CVE-2023-25361\nhttps://access.redhat.com/security/cve/CVE-2023-25362\nhttps://access.redhat.com/security/cve/CVE-2023-25363\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_relea
se_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      },
      {
        "db": "VULHUB",
        "id": "VHN-429655"
      },
      {
        "db": "PACKETSTORM",
        "id": "169607"
      },
      {
        "db": "PACKETSTORM",
        "id": "169794"
      },
      {
        "db": "PACKETSTORM",
        "id": "169554"
      },
      {
        "db": "PACKETSTORM",
        "id": "169555"
      },
      {
        "db": "PACKETSTORM",
        "id": "169550"
      },
      {
        "db": "PACKETSTORM",
        "id": "169932"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      }
    ],
    "trust": 1.71
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-429655",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429655"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-42824",
        "trust": 2.5
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2022/11/04/4",
        "trust": 1.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169607",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "169932",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "169795",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6029",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6248",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5789",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6137",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5305.2",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "169550",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "169794",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "169554",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "169555",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "169556",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-429655",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172625",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172241",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429655"
      },
      {
        "db": "PACKETSTORM",
        "id": "169607"
      },
      {
        "db": "PACKETSTORM",
        "id": "169794"
      },
      {
        "db": "PACKETSTORM",
        "id": "169554"
      },
      {
        "db": "PACKETSTORM",
        "id": "169555"
      },
      {
        "db": "PACKETSTORM",
        "id": "169550"
      },
      {
        "db": "PACKETSTORM",
        "id": "169932"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "id": "VAR-202210-1530",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429655"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:51:15.937000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Apple macOS Big Sur  and macOS Monterey Security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=212499"
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.3,
        "url": "https://support.apple.com/en-us/ht213495"
      },
      {
        "trust": 1.7,
        "url": "https://www.debian.org/security/2022/dsa-5273"
      },
      {
        "trust": 1.7,
        "url": "https://www.debian.org/security/2022/dsa-5274"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213488"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213489"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213491"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/en-us/ht213492"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2022/11/msg00010.html"
      },
      {
        "trust": 1.7,
        "url": "http://www.openwall.com/lists/oss-security/2022/11/04/4"
      },
      {
        "trust": 1.7,
        "url": "https://security.gentoo.org/glsa/202305-32"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5lf4lyp725xz7rwopfuv6dgpn4q5duu4/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/jofkx6buejfecsvfv6p5inqcoyqbb4nz/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/aqklegjk3lhakuqolbhnr2di3iugllty/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42824"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5lf4lyp725xz7rwopfuv6dgpn4q5duu4/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/aqklegjk3lhakuqolbhnr2di3iugllty/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/jofkx6buejfecsvfv6p5inqcoyqbb4nz/"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42799"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42823"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5305.2"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-wpe-webkit-five-vulnerabilities-39866"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-39701"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-42824/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169932/ubuntu-security-notice-usn-5730-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169795/debian-security-advisory-5274-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169607/apple-security-advisory-2022-10-27-15.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6137"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6248"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6029"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5789"
      },
      {
        "trust": 0.4,
        "url": "https://support.apple.com/en-us/ht201222."
      },
      {
        "trust": 0.4,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32923"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42808"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32924"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42811"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32940"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42813"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32888"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32922"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32947"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42825"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46698"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42867"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46692"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46691"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42826"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46699"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32886"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213495."
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213491."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213492."
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213489."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42806"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32938"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32946"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42820"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/webkit2gtk/2.38.2-0ubuntu0.22.10.1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/webkit2gtk/2.38.2-0ubuntu0.20.04.1"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-5730-1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/webkit2gtk/2.38.2-0ubuntu0.22.04.2"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32891"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0010.html"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0001.html"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23517"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2022-0009.html"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2023-0003.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23518"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32885"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-27932"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46700"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-27954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25361"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25360"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42856"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-25362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-28205"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-23517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-23518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46692"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:2256"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46699"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25360"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.2_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42867"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42863"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32886"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46698"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42826"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429655"
      },
      {
        "db": "PACKETSTORM",
        "id": "169607"
      },
      {
        "db": "PACKETSTORM",
        "id": "169794"
      },
      {
        "db": "PACKETSTORM",
        "id": "169554"
      },
      {
        "db": "PACKETSTORM",
        "id": "169555"
      },
      {
        "db": "PACKETSTORM",
        "id": "169550"
      },
      {
        "db": "PACKETSTORM",
        "id": "169932"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-429655"
      },
      {
        "db": "PACKETSTORM",
        "id": "169607"
      },
      {
        "db": "PACKETSTORM",
        "id": "169794"
      },
      {
        "db": "PACKETSTORM",
        "id": "169554"
      },
      {
        "db": "PACKETSTORM",
        "id": "169555"
      },
      {
        "db": "PACKETSTORM",
        "id": "169550"
      },
      {
        "db": "PACKETSTORM",
        "id": "169932"
      },
      {
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-11-01T00:00:00",
        "db": "VULHUB",
        "id": "VHN-429655"
      },
      {
        "date": "2022-10-31T15:10:32",
        "db": "PACKETSTORM",
        "id": "169607"
      },
      {
        "date": "2022-11-09T13:38:05",
        "db": "PACKETSTORM",
        "id": "169794"
      },
      {
        "date": "2022-10-31T14:19:52",
        "db": "PACKETSTORM",
        "id": "169554"
      },
      {
        "date": "2022-10-31T14:20:08",
        "db": "PACKETSTORM",
        "id": "169555"
      },
      {
        "date": "2022-10-31T14:18:24",
        "db": "PACKETSTORM",
        "id": "169550"
      },
      {
        "date": "2022-11-18T14:26:50",
        "db": "PACKETSTORM",
        "id": "169932"
      },
      {
        "date": "2023-05-30T16:32:33",
        "db": "PACKETSTORM",
        "id": "172625"
      },
      {
        "date": "2023-05-09T15:24:16",
        "db": "PACKETSTORM",
        "id": "172241"
      },
      {
        "date": "2022-10-24T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      },
      {
        "date": "2022-11-01T20:15:24.167000",
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-12-13T00:00:00",
        "db": "VULHUB",
        "id": "VHN-429655"
      },
      {
        "date": "2023-05-31T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      },
      {
        "date": "2025-04-21T16:15:51.440000",
        "db": "NVD",
        "id": "CVE-2022-42824"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple macOS Big Sur and macOS Monterey Security hole",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1674"
      }
    ],
    "trust": 0.6
  }
}

VAR-202212-1523

Vulnerability from variot - Updated: 2025-12-22 22:49

The issue was addressed with improved memory handling. This issue is fixed in Safari 16.2, tvOS 16.2, macOS Ventura 13.1, iOS 15.7.2 and iPadOS 15.7.2, iOS 16.2 and iPadOS 16.2, and watchOS 9.2. Processing maliciously crafted web content may result in the disclosure of process memory. Unspecified vulnerabilities exist in multiple Apple products, including Safari, iPadOS, and iOS; information may be obtained.

For the stable distribution (bullseye), these problems have been fixed in version 2.38.3-1~deb11u1.

We recommend that you upgrade your webkit2gtk packages.

Safari 16.2 may be obtained from the Mac App Store. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

APPLE-SA-2022-12-13-1 iOS 16.2 and iPadOS 16.2

iOS 16.2 and iPadOS 16.2 address the following issues. Information about the security content is also available at https://support.apple.com/HT213530.

Accounts Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: A user may be able to view sensitive user information Description: This issue was addressed with improved data protection. CVE-2022-42843: Mickey Jin (@patch1t)

AppleAVD Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Parsing a maliciously crafted video file may lead to kernel code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46694: Andrey Labunets and Nikita Tarakanov

AppleMobileFileIntegrity Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to bypass Privacy preferences Description: This issue was addressed by enabling hardened runtime. CVE-2022-42865: Wojciech Reguła (@_r3ggi) of SecuRing

AVEVideoEncoder Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved checks. CVE-2022-42848: ABC Research s.r.o

CoreServices Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to bypass Privacy preferences Description: Multiple issues were addressed by removing the vulnerable code. CVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of Offensive Security

GPU Drivers Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to disclose kernel memory Description: The issue was addressed with improved memory handling. CVE-2022-46702: Xia0o0o0o of W4terDr0p, Sun Yat-sen University

Graphics Driver Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42850: Willy R. Vasquez of The University of Texas at Austin

Graphics Driver Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Parsing a maliciously crafted video file may lead to unexpected system termination Description: The issue was addressed with improved memory handling. CVE-2022-42846: Willy R. Vasquez of The University of Texas at Austin

ImageIO Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46693: Mickey Jin (@patch1t)

ImageIO Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Parsing a maliciously crafted TIFF file may lead to disclosure of user information Description: The issue was addressed with improved memory handling. CVE-2022-42851: Mickey Jin (@patch1t)

IOHIDFamily Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved state handling. CVE-2022-42864: Tommy Muir (@Muirey03)

IOMobileFrameBuffer Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved input validation. CVE-2022-46690: John Aakerblom (@jaakerblom)

iTunes Store Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: A remote user may be able to cause unexpected app termination or arbitrary code execution Description: An issue existed in the parsing of URLs. CVE-2022-42837: Weijia Dai (@dwj1210) of Momo Security

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with additional validation. CVE-2022-46689: Ian Beer of Google Project Zero

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Connecting to a malicious NFS server may lead to arbitrary code execution with kernel privileges Description: The issue was addressed with improved bounds checks. CVE-2022-46701: Felix Poulin-Belanger

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: A remote user may be able to cause kernel code execution Description: The issue was addressed with improved memory handling. CVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year Lab

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to break out of its sandbox Description: This issue was addressed with improved checks. CVE-2022-42861: pattern-f (@pattern_F_) of Ant Security Light-Year Lab

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to break out of its sandbox Description: The issue was addressed with improved memory handling. CVE-2022-42844: pattern-f (@pattern_F_) of Ant Security Light-Year Lab

Kernel Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app with root privileges may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42845: Adam Doupé of ASU SEFCOM

Photos Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Shake-to-undo may allow a deleted photo to be re-surfaced without authentication Description: The issue was addressed with improved bounds checks. CVE-2022-32943: an anonymous researcher

ppp Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to execute arbitrary code with kernel privileges Description: The issue was addressed with improved memory handling. CVE-2022-42840: an anonymous researcher

Preferences Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to use arbitrary entitlements Description: A logic issue was addressed with improved state management. CVE-2022-42855: Ivan Fratric of Google Project Zero

Printing Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to bypass Privacy preferences Description: This issue was addressed by removing the vulnerable code. CVE-2022-42862: Mickey Jin (@patch1t)

Safari Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Visiting a website that frames malicious content may lead to UI spoofing Description: A spoofing issue existed in the handling of URLs. CVE-2022-46695: KirtiKumar Anandrao Ramchandani

Software Update Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: A user may be able to elevate privileges Description: An access issue existed with privileged API calls. CVE-2022-42849: Mickey Jin (@patch1t)

Weather Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: An app may be able to read sensitive location information Description: The issue was addressed with improved handling of caches. CVE-2022-42866: an anonymous researcher

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 245521 CVE-2022-42867: Maddie Stone of Google Project Zero

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory consumption issue was addressed with improved memory handling. WebKit Bugzilla: 245466 CVE-2022-46691: an anonymous researcher

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing maliciously crafted web content may bypass Same Origin Policy Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 246783 CVE-2022-46692: KirtiKumar Anandrao Ramchandani

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing maliciously crafted web content may result in the disclosure of process memory Description: The issue was addressed with improved memory handling. CVE-2022-42852: hazbinhotel working with Trend Micro Zero Day Initiative

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. WebKit Bugzilla: 246942 CVE-2022-46696: Samuel Groß of Google V8 Security WebKit Bugzilla: 247562 CVE-2022-46700: Samuel Groß of Google V8 Security

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A logic issue was addressed with improved checks. CVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs & DNSLab, Korea Univ.

WebKit Available for: iPhone 8 and later, iPad Pro (all models), iPad Air 3rd generation and later, iPad 5th generation and later, and iPad mini 5th generation and later Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 247420 CVE-2022-46699: Samuel Groß of Google V8 Security WebKit Bugzilla: 244622 CVE-2022-42863: an anonymous researcher

Additional recognition

Kernel We would like to acknowledge Zweig of Kunlun Lab and pattern-f (@pattern_F_) of Ant Security Light-Year Lab for their assistance.

Safari Extensions We would like to acknowledge Oliver Dunk and Christian R. of 1Password for their assistance.

WebKit We would like to acknowledge an anonymous researcher and scarlet for their assistance.

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/ iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device. To check that the iPhone, iPod touch, or iPad has been updated:
* Navigate to Settings
* Select General
* Select About
The version after applying this update will be "iOS 16.2 and iPadOS 16.2". All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/


====================================================================
Red Hat Security Advisory

Synopsis: Important: webkit2gtk3 security and bug fix update
Advisory ID: RHSA-2023:2834-01
Product: Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2023:2834
Issue date: 2023-05-16
CVE Names: CVE-2022-32886 CVE-2022-32888 CVE-2022-32923 CVE-2022-42799 CVE-2022-42823 CVE-2022-42824 CVE-2022-42826 CVE-2022-42852 CVE-2022-42863 CVE-2022-42867 CVE-2022-46691 CVE-2022-46692 CVE-2022-46698 CVE-2022-46699 CVE-2022-46700 CVE-2023-23517 CVE-2023-23518 CVE-2023-25358 CVE-2023-25360 CVE-2023-25361 CVE-2023-25362 CVE-2023-25363
====================================================================

1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64

3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.8 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
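On a registered RHEL 8 host, the update described in this advisory can typically be applied with dnf. A minimal sketch (advisory and package names are taken from this advisory; the host and subscription state are assumed):

```shell
# Apply only the packages shipped in this advisory...
sudo dnf update --advisory=RHSA-2023:2834

# ...or update the affected packages directly.
sudo dnf update webkit2gtk3 webkit2gtk3-jsc
```

A reboot is not usually required for a library update, but applications linked against WebKitGTK must be restarted to pick up the fixed code.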

5. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: webkit2gtk3-2.38.5-1.el8.src.rpm

aarch64: webkit2gtk3-2.38.5-1.el8.aarch64.rpm webkit2gtk3-debuginfo-2.38.5-1.el8.aarch64.rpm webkit2gtk3-debugsource-2.38.5-1.el8.aarch64.rpm webkit2gtk3-devel-2.38.5-1.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el8.aarch64.rpm webkit2gtk3-jsc-2.38.5-1.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.38.5-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.aarch64.rpm

ppc64le: webkit2gtk3-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-debuginfo-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-debugsource-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-devel-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-jsc-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.38.5-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.ppc64le.rpm

s390x: webkit2gtk3-2.38.5-1.el8.s390x.rpm webkit2gtk3-debuginfo-2.38.5-1.el8.s390x.rpm webkit2gtk3-debugsource-2.38.5-1.el8.s390x.rpm webkit2gtk3-devel-2.38.5-1.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el8.s390x.rpm webkit2gtk3-jsc-2.38.5-1.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el8.s390x.rpm webkit2gtk3-jsc-devel-2.38.5-1.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.s390x.rpm

x86_64: webkit2gtk3-2.38.5-1.el8.i686.rpm webkit2gtk3-2.38.5-1.el8.x86_64.rpm webkit2gtk3-debuginfo-2.38.5-1.el8.i686.rpm webkit2gtk3-debuginfo-2.38.5-1.el8.x86_64.rpm webkit2gtk3-debugsource-2.38.5-1.el8.i686.rpm webkit2gtk3-debugsource-2.38.5-1.el8.x86_64.rpm webkit2gtk3-devel-2.38.5-1.el8.i686.rpm webkit2gtk3-devel-2.38.5-1.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.38.5-1.el8.x86_64.rpm webkit2gtk3-jsc-2.38.5-1.el8.i686.rpm webkit2gtk3-jsc-2.38.5-1.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.38.5-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.38.5-1.el8.i686.rpm webkit2gtk3-jsc-devel-2.38.5-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.i686.rpm webkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
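The signature check can be sketched as follows (the package file name is taken from the package list above; the key path is the standard location on a RHEL 8 system, assumed here):

```shell
# Import Red Hat's release key (a no-op if already imported), then
# verify the digests and GPG signature of a downloaded package.
sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
rpm --checksig webkit2gtk3-2.38.5-1.el8.x86_64.rpm
```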

6. References:

https://access.redhat.com/security/cve/CVE-2022-32886
https://access.redhat.com/security/cve/CVE-2022-32888
https://access.redhat.com/security/cve/CVE-2022-32923
https://access.redhat.com/security/cve/CVE-2022-42799
https://access.redhat.com/security/cve/CVE-2022-42823
https://access.redhat.com/security/cve/CVE-2022-42824
https://access.redhat.com/security/cve/CVE-2022-42826
https://access.redhat.com/security/cve/CVE-2022-42852
https://access.redhat.com/security/cve/CVE-2022-42863
https://access.redhat.com/security/cve/CVE-2022-42867
https://access.redhat.com/security/cve/CVE-2022-46691
https://access.redhat.com/security/cve/CVE-2022-46692
https://access.redhat.com/security/cve/CVE-2022-46698
https://access.redhat.com/security/cve/CVE-2022-46699
https://access.redhat.com/security/cve/CVE-2022-46700
https://access.redhat.com/security/cve/CVE-2023-23517
https://access.redhat.com/security/cve/CVE-2023-23518
https://access.redhat.com/security/cve/CVE-2023-25358
https://access.redhat.com/security/cve/CVE-2023-25360
https://access.redhat.com/security/cve/CVE-2023-25361
https://access.redhat.com/security/cve/CVE-2023-25362
https://access.redhat.com/security/cve/CVE-2023-25363
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.8_release_notes/index

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc.


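The JSON-LD record below carries CVSS v3.1 vector strings alongside precomputed scores. As a cross-check, the base score can be recomputed from the vector alone. A minimal sketch of the CVSS 3.1 base-score formula, covering unchanged scope only (the function name and structure are ours, not part of the record):

```python
# Minimal CVSS v3.1 base-score calculator, unchanged scope (S:U) only.
# Weights are the base-metric values from the CVSS v3.1 specification;
# PR uses the unchanged-scope column.
import math

WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "C": {"H": 0.56, "L": 0.22, "N": 0.0},
    "I": {"H": 0.56, "L": 0.22, "N": 0.0},
    "A": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def base_score(vector: str) -> float:
    # "CVSS:3.1/AV:N/AC:L/..." -> {"AV": "N", "AC": "L", ...}
    parts = dict(p.split(":") for p in vector.split("/")[1:])
    if parts["S"] != "U":
        raise ValueError("this sketch handles unchanged scope only")
    w = {k: WEIGHTS[k][parts[k]] for k in WEIGHTS}
    iss = 1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"])
    impact = 6.42 * iss
    exploitability = 8.22 * w["AV"] * w["AC"] * w["PR"] * w["UI"]
    if impact <= 0:
        return 0.0
    # Round up to one decimal, as the spec's Roundup function requires.
    return math.ceil(min(impact + exploitability, 10.0) * 10) / 10

# The vector recorded for CVE-2022-42852 in this entry:
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N"))  # 6.5
```

Running this on the vector stored for CVE-2022-42852 reproduces the 6.5 (MEDIUM) base score that NVD assigned in the record below.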
{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202212-1523",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.2"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.7.2"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.2"
      },
      {
        "model": "ipados",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.2"
      },
      {
        "model": "iphone os",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.7.2"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "9.2"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.2"
      },
      {
        "model": "macos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0"
      },
      {
        "model": "tvos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "ios",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "watchos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": "9.2"
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "safari",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "170319"
      },
      {
        "db": "PACKETSTORM",
        "id": "170317"
      },
      {
        "db": "PACKETSTORM",
        "id": "170311"
      }
    ],
    "trust": 0.3
  },
  "cve": "CVE-2022-42852",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-42852",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.5,
            "baseSeverity": "Medium",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-42852",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-42852",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
            "id": "CVE-2022-42852",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-42852",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202212-3045",
            "trust": 0.6,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The issue was addressed with improved memory handling. This issue is fixed in Safari 16.2, tvOS 16.2, macOS Ventura 13.1, iOS 15.7.2 and iPadOS 15.7.2, iOS 16.2 and iPadOS 16.2, watchOS 9.2. Processing maliciously crafted web content may result in the disclosure of process memory. Safari , iPadOS , iOS Unspecified vulnerabilities exist in multiple Apple products.Information may be obtained. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.38.3-1~deb11u1. \n\nWe recommend that you upgrade your webkit2gtk packages. \n\nSafari 16.2 may be obtained from the Mac App Store. Apple is aware of a report that this issue\nmay have been actively exploited against versions of iOS released\nbefore iOS 15.1. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-12-13-1 iOS 16.2 and iPadOS 16.2\n\niOS 16.2 and iPadOS 16.2 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213530. \n\nAccounts\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: A user may be able to view sensitive user information\nDescription: This issue was addressed with improved data protection. \nCVE-2022-42843: Mickey Jin (@patch1t)\n\nAppleAVD\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Parsing a maliciously crafted video file may lead to kernel\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-46694: Andrey Labunets and Nikita Tarakanov\n\nAppleMobileFileIntegrity\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed by enabling hardened runtime. \nCVE-2022-42865: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nAVEVideoEncoder\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-42848: ABC Research s.r.o\n\nCoreServices\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: Multiple issues were addressed by removing the\nvulnerable code. \nCVE-2022-42859: Mickey Jin (@patch1t), Csaba Fitzl (@theevilbit) of\nOffensive Security\n\nGPU Drivers\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to disclose kernel memory\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-46702: Xia0o0o0o of W4terDr0p, Sun Yat-sen University\n\nGraphics Driver\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42850: Willy R. 
Vasquez of The University of Texas at Austin\n\nGraphics Driver\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Parsing a maliciously crafted video file may lead to\nunexpected system termination\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42846: Willy R. Vasquez of The University of Texas at Austin\n\nImageIO\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. \nCVE-2022-46693: Mickey Jin (@patch1t)\n\nImageIO\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Parsing a maliciously crafted TIFF file may lead to\ndisclosure of user information\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42851: Mickey Jin (@patch1t)\n\nIOHIDFamily\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2022-42864: Tommy Muir (@Muirey03)\n\nIOMobileFrameBuffer\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: An out-of-bounds write issue was addressed with improved\ninput validation. 
\nCVE-2022-46690: John Aakerblom (@jaakerblom)\n\niTunes Store\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: A remote user may be able to cause unexpected app termination\nor arbitrary code execution\nDescription: An issue existed in the parsing of URLs. \nCVE-2022-42837: Weijia Dai (@dwj1210) of Momo Security\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: A race condition was addressed with additional\nvalidation. \nCVE-2022-46689: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Connecting to a malicious NFS server may lead to arbitrary\ncode execution with kernel privileges\nDescription: The issue was addressed with improved bounds checks. \nCVE-2022-46701: Felix Poulin-Belanger\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: A remote user may be able to cause kernel code execution\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42842: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to break out of its sandbox\nDescription: This issue was addressed with improved checks. 
\nCVE-2022-42861: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to break out of its sandbox\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42844: pattern-f (@pattern_F_) of Ant Security Light-Year\nLab\n\nKernel\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app with root privileges may be able to execute arbitrary\ncode with kernel privileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42845: Adam Doup\u00e9 of ASU SEFCOM\n\nPhotos\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Shake-to-undo may allow a deleted photo to be re-surfaced\nwithout authentication\nDescription: The issue was addressed with improved bounds checks. \nCVE-2022-32943: an anonymous researcher\n\nppp\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to execute arbitrary code with kernel\nprivileges\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42840: an anonymous researcher\n\nPreferences\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to use arbitrary entitlements\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2022-42855: Ivan Fratric of Google Project Zero\n\nPrinting\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to bypass Privacy preferences\nDescription: This issue was addressed by removing the vulnerable\ncode. \nCVE-2022-42862: Mickey Jin (@patch1t)\n\nSafari\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Visiting a website that frames malicious content may lead to\nUI spoofing\nDescription: A spoofing issue existed in the handling of URLs. \nCVE-2022-46695: KirtiKumar Anandrao Ramchandani\n\nSoftware Update\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: A user may be able to elevate privileges\nDescription: An access issue existed with privileged API calls. \nCVE-2022-42849: Mickey Jin (@patch1t)\n\nWeather\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: An app may be able to read sensitive location information\nDescription: The issue was addressed with improved handling of\ncaches. \nCVE-2022-42866: an anonymous researcher\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nWebKit Bugzilla: 245521\nCVE-2022-42867: Maddie Stone of Google Project Zero\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 245466\nCVE-2022-46691: an anonymous researcher\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing maliciously crafted web content may bypass Same\nOrigin Policy\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 246783\nCVE-2022-46692: KirtiKumar Anandrao Ramchandani\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing maliciously crafted web content may result in the\ndisclosure of process memory\nDescription: The issue was addressed with improved memory handling. \nCVE-2022-42852: hazbinhotel working with Trend Micro Zero Day\nInitiative\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. 
\nWebKit Bugzilla: 246942\nCVE-2022-46696: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 247562\nCVE-2022-46700: Samuel Gro\u00df of Google V8 Security\n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A logic issue was addressed with improved checks. \nCVE-2022-46698: Dohyun Lee (@l33d0hyun) of SSD Secure Disclosure Labs\n\u0026 DNSLab, Korea Univ. \n\nWebKit\nAvailable for: iPhone 8 and later, iPad Pro (all models), iPad Air\n3rd generation and later, iPad 5th generation and later, and iPad\nmini 5th generation and later\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 247420\nCVE-2022-46699: Samuel Gro\u00df of Google V8 Security\nWebKit Bugzilla: 244622\nCVE-2022-42863: an anonymous researcher\n\nAdditional recognition\n\nKernel\nWe would like to acknowledge Zweig of Kunlun Lab and pattern-f\n(@pattern_F_) of Ant Security Light-Year Lab for their assistance. \n\nSafari Extensions\nWe would like to acknowledge Oliver Dunk and Christian R. of\n1Password for their assistance. \n\nWebKit\nWe would like to acknowledge an anonymous researcher and scarlet for\ntheir assistance. \n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/  iTunes and Software Update on the\ndevice will automatically check Apple\u0027s update server on its weekly\nschedule. 
When an update is detected, it is downloaded and the option\nto be installed is presented to the user when the iOS device is\ndocked. We recommend applying the update immediately if possible. \nSelecting Don\u0027t Install will present the option the next time you\nconnect your iOS device.  The automatic update process may take up to\na week depending on the day that iTunes or the device checks for\nupdates. You may manually obtain the update via the Check for Updates\nbutton within iTunes, or the Software Update on your device.  To\ncheck that the iPhone, iPod touch, or iPad has been updated:  *\nNavigate to Settings * Select General * Select About. The version\nafter applying this update will be \"iOS 16.2 and iPadOS 16.2\". \nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. \n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEBP+4DupqR5Sgt1DB4RjMIDkeNxkFAmOZFYIACgkQ4RjMIDke\nNxmnGxAAuO8CwNDPYn+yyq7VIRB6+sCaV962MC2c7TFzU+R1waBBaG57NmJQeANL\n5645QVNL/5k0TaFiSV/dFg15YvBkLgC6JVHeKebYvboSHB0wsfxejfwYnYFLNc44\nxriqdQSarmp61Aw4270LFRjV1XokOGuEzo4U3InyhiawceAVR/qvteNDxETnsiAN\njyaPRnx1d2K22+VZAFRtVjLgFLkBwAguamuMe8vIaqP71giORZNfX+w+GduszAqY\nSpo5aPC8G56GmHGZnSid/utBa2CqXfVRMyUmnYYukPrYACdy4WkTKWOWCbPrCbkT\nVWc43jsWhPqsMFopyy4K+SaCZqNdcpIag8n5uvCsf0tLPIUPTMRXSj3w1WOq1++6\nNV3cZp6RaFVYJ3twU2D1q/Vj8VXILNPvECzGNlOxRJu0H7w3VD5ZdZ16Gsc8T62C\nER5CSJr49fj5ixLCP9HepmIK5Rz0UXMxmylUANk2h88HK9hr7/s1EOYtCVjTEYQ2\nc1ESd87HYz24iYnqaL5d8KDFiGQCVfuwnz2keWO7Lc2E+8OUqqYTWfCoSWprWnSU\nZR31xA+JnPSS4WBP4RzitUmAiP5sdulF0jg4IZ0y0RUwWI4lD4vv0z9glb++rRHh\nPJj9uuirFcmKJ1BSfEN1glq/gGW8SP1vuq1BU+fIdeHtw4XeeIw=\n=qV7H\n-----END PGP SIGNATURE-----\n\n\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: webkit2gtk3 security and bug fix update\nAdvisory ID:       RHSA-2023:2834-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2023:2834\nIssue date:        2023-05-16\nCVE Names:         CVE-2022-32886 CVE-2022-32888 CVE-2022-32923\n                   CVE-2022-42799 CVE-2022-42823 CVE-2022-42824\n                   CVE-2022-42826 CVE-2022-42852 CVE-2022-42863\n                   CVE-2022-42867 CVE-2022-46691 CVE-2022-46692\n                   CVE-2022-46698 CVE-2022-46699 CVE-2022-46700\n                   CVE-2023-23517 CVE-2023-23518 CVE-2023-25358\n                   CVE-2023-25360 CVE-2023-25361 CVE-2023-25362\n                   CVE-2023-25363\n====================================================================\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.8 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 
8):\n\nSource:\nwebkit2gtk3-2.38.5-1.el8.src.rpm\n\naarch64:\nwebkit2gtk3-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-devel-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-2.38.5-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.38.5-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.38.5-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-devel-2.38.5-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.38.5-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.38.5-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.38.5-1.el8.x86_6
4.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.38.5-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.38.5-1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32886\nhttps://access.redhat.com/security/cve/CVE-2022-32888\nhttps://access.redhat.com/security/cve/CVE-2022-32923\nhttps://access.redhat.com/security/cve/CVE-2022-42799\nhttps://access.redhat.com/security/cve/CVE-2022-42823\nhttps://access.redhat.com/security/cve/CVE-2022-42824\nhttps://access.redhat.com/security/cve/CVE-2022-42826\nhttps://access.redhat.com/security/cve/CVE-2022-42852\nhttps://access.redhat.com/security/cve/CVE-2022-42863\nhttps://access.redhat.com/security/cve/CVE-2022-42867\nhttps://access.redhat.com/security/cve/CVE-2022-46691\nhttps://access.redhat.com/security/cve/CVE-2022-46692\nhttps://access.redhat.com/security/cve/CVE-2022-46698\nhttps://access.redhat.com/security/cve/CVE-2022-46699\nhttps://access.redhat.com/security/cve/CVE-2022-46700\nhttps://access.redhat.com/security/cve/CVE-2023-23517\nhttps://access.redhat.com/security/cve/CVE-2023-23518\nhttps://access.redhat.com/security/cve/CVE-2023-25358\nhttps://access.redhat.com/security/cve/CVE-2023-25360\nhttps://access.redhat.com/security/cve/CVE-2023-25361\nhttps://access.redhat.com/security/cve/CVE-2023-25362\nhttps://access.redhat.com/security/cve/CVE-2023-25363\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.8_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "db": "VULHUB",
        "id": "VHN-439656"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-42852"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170319"
      },
      {
        "db": "PACKETSTORM",
        "id": "170317"
      },
      {
        "db": "PACKETSTORM",
        "id": "170311"
      },
      {
        "db": "PACKETSTORM",
        "id": "172380"
      }
    ],
    "trust": 2.25
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-439656",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439656"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-42852",
        "trust": 3.9
      },
      {
        "db": "PACKETSTORM",
        "id": "170319",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "170350",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170431",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0118",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1322",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0058",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1216",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3045",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "170317",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170311",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170349",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170318",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170314",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170312",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-439656",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-42852",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172380",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439656"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-42852"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170319"
      },
      {
        "db": "PACKETSTORM",
        "id": "170317"
      },
      {
        "db": "PACKETSTORM",
        "id": "170311"
      },
      {
        "db": "PACKETSTORM",
        "id": "172380"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "id": "VAR-202212-1523",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439656"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:49:42.660000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT213536 Apple\u00a0 Security update",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT213530"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-200",
        "trust": 1.0
      },
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      },
      {
        "problemtype": "Lack of information (CWE-noinfo) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/20"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/21"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/23"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/26"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/27"
      },
      {
        "trust": 2.5,
        "url": "http://seclists.org/fulldisclosure/2022/dec/28"
      },
      {
        "trust": 2.4,
        "url": "https://support.apple.com/en-us/ht213536"
      },
      {
        "trust": 2.4,
        "url": "https://security.gentoo.org/glsa/202305-32"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213530"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213531"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213532"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213535"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht213537"
      },
      {
        "trust": 1.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42852"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-42852/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-wpe-webkit-multiple-vulnerabilities-40179"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170431/ubuntu-security-notice-usn-5797-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-macos-multiple-vulnerabilities-of-december-2022-40105"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1216"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170319/apple-security-advisory-2022-12-13-9.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0118"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1322"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0058"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170350/debian-security-advisory-5309-1.html"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46692"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42856"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46698"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42867"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46699"
      },
      {
        "trust": 0.3,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.3,
        "url": "https://support.apple.com/en-us/ht201222."
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42863"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46700"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46691"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42849"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42848"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42842"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42855"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42845"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42851"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42843"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-46696"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213537."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42864"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213535."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42850"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213530."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32943"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42846"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42840"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42837"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.8_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46698"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32886"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-23517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46700"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-23518"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42824"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:2834"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25362"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32923"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46692"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-25360"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46691"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42863"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42867"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46699"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32886"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42852"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42823"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-439656"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-42852"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170319"
      },
      {
        "db": "PACKETSTORM",
        "id": "170317"
      },
      {
        "db": "PACKETSTORM",
        "id": "170311"
      },
      {
        "db": "PACKETSTORM",
        "id": "172380"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-439656"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-42852"
      },
      {
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "db": "PACKETSTORM",
        "id": "170319"
      },
      {
        "db": "PACKETSTORM",
        "id": "170317"
      },
      {
        "db": "PACKETSTORM",
        "id": "170311"
      },
      {
        "db": "PACKETSTORM",
        "id": "172380"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-12-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-439656"
      },
      {
        "date": "2022-12-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-42852"
      },
      {
        "date": "2023-01-02T14:19:00",
        "db": "PACKETSTORM",
        "id": "170349"
      },
      {
        "date": "2022-12-22T02:13:43",
        "db": "PACKETSTORM",
        "id": "170319"
      },
      {
        "date": "2022-12-22T02:12:53",
        "db": "PACKETSTORM",
        "id": "170317"
      },
      {
        "date": "2022-12-22T02:10:24",
        "db": "PACKETSTORM",
        "id": "170311"
      },
      {
        "date": "2023-05-16T17:10:07",
        "db": "PACKETSTORM",
        "id": "172380"
      },
      {
        "date": "2022-12-13T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      },
      {
        "date": "2023-11-29T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "date": "2022-12-15T19:15:24.797000",
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-439656"
      },
      {
        "date": "2022-12-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-42852"
      },
      {
        "date": "2023-05-31T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      },
      {
        "date": "2023-11-29T03:37:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      },
      {
        "date": "2025-04-21T15:15:53.250000",
        "db": "NVD",
        "id": "CVE-2022-42852"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Vulnerabilities in multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-023611"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202212-3045"
      }
    ],
    "trust": 0.6
  }
}

VAR-202002-1480

Vulnerability from variot - Updated: 2025-12-22 22:49

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to arbitrary code execution. Apple iOS, etc. are all products of Apple (Apple). Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. Apple iPadOS is an operating system for iPad tablets. A buffer error vulnerability exists in the WebKit component of several Apple products. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601) An out-of-bounds read was addressed with improved input validation. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. 
This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846) WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902). Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3

  1. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

  1. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update Advisory ID: RHSA-2020:5633-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633 Issue date: 2021-02-24 CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
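
One of the fixes listed above, CVE-2020-7774 (nodejs-y18n prototype pollution), belongs to a bug class that is easy to sketch. The following is an illustrative example of the general pattern only, not y18n's actual code: a recursive merge that trusts attacker-controlled keys lets a `__proto__` key write onto `Object.prototype`.

```javascript
// Sketch of the prototype-pollution bug class behind CVE-2020-7774.
// naiveMerge is a hypothetical vulnerable helper, not code from y18n.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      if (!(key in target)) target[key] = {};
      // For key "__proto__", target[key] resolves to Object.prototype,
      // so the recursion writes onto the shared prototype.
      naiveMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-supplied JSON: JSON.parse creates an own "__proto__" property.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

// Every plain object now appears to have the attacker's property.
console.log({}.polluted); // true

// A guarded merge rejects the dangerous keys outright.
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (key === '__proto__' || key === 'constructor' || key === 'prototype') continue;
    target[key] = source[key];
  }
  return target;
}
```

Fixes for this class of issue, as in the patched y18n, amount to refusing to treat `__proto__`, `constructor`, and `prototype` as ordinary merge keys.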

  3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

  4. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state
1752220 - [OVN] Network Policy fails to work when project label gets overwritten
1756096 - Local storage operator should implement must-gather spec
1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
1768255 - installer reports 100% complete but failing components
1770017 - Init containers restart when the exited container is removed from node.
1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating
1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset
1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale
1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands
1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions
1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved"
1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor
1801089 - [OVN] Installation failed and monitoring pod not created due to some network error.
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image
1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration
1806000 - CRI-O failing with: error reserving ctr name
1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1810438 - Installation logs are not gathered from OCP nodes
1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist
1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation
1813012 - EtcdDiscoveryDomain no longer needed
1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints
1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use
1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
1819457 - Package Server is in 'Cannot update' status despite properly working
1820141 - [RFE] deploy qemu-quest-agent on the nodes
1822744 - OCS Installation CI test flaking
1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario
1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool
1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file
1829723 - User workload monitoring alerts fire out of the box
1832968 - oc adm catalog mirror does not mirror the index image itself
1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN
1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters
1834995 - olmFull suite always fails once th suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text.
1853116 - --to option does not work with --credentials-requests flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the create api help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - oc api-resources does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and ‘URL’
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in oc explain
1880787 - No description for Provisioning CRD for oc explain
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete,
terraform can't find the cluster 1882057 - Not able to select access modes for snapshot and clone 1882140 - No description for spec.kubeletConfig 1882176 - Master recovery instructions don't handle IP change well 1882191 - Installation fails against external resources which lack DNS Subject Alternative Name 1882209 - [ BateMetal IPI ] local coredns resolution not working 1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version" 1882268 - [e2e][automation]Add Integration Test for Snapshots 1882361 - Retrieve and expose the latest report for the cluster 1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use 1882556 - git:// protocol in origin tests is not currently proxied 1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4 1882608 - Spot instance not getting created on AzureGovCloud 1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance 1882649 - IPI installer labels all images it uploads into glance as qcow2 1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic 1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page 1882660 - Operators in a namespace should be installed together when approve one 1882667 - [ovn] br-ex Link not found when scale up RHEL worker 1882723 - [vsphere]Suggested mimimum value for providerspec not working 1882730 - z systems not reporting correct core count in recording rule 1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully 1882781 - nameserver= option to dracut creates extra NM connection profile 1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined 1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1883388 - Bare 
Metal Hosts Details page doesn't show Mainitenance and Power On/Off status 1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace 1883425 - Gather top installplans and their count 1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2 1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error 1883560 - operator-registry image needs clean up in /tmp 1883563 - Creating duplicate namespace from create namespace modal breaks the UI 1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful" 1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate 1883660 - e2e-metal-ipi CI job consistently failing on 4.4 1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests 1883766 - [e2e][automation] Adjust tests for UI changes 1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations 1883773 - opm alpha bundle build fails on win10 home 1883790 - revert "force cert rotation every couple days for development" in 4.7 1883803 - node pull secret feature is not working as expected 1883836 - Jenkins imagestream ubi8 and nodejs12 update 1883847 - The UI does not show checkbox for enable encryption at rest for OCS 1883853 - go list -m all does not work 1883905 - race condition in opm index add --overwrite-latest 1883946 - Understand why trident CSI pods are getting deleted by OCP 1884035 - Pods are illegally transitioning back to pending 1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace 1884131 - oauth-proxy repository should run tests 1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied 1884221 - IO becomes 
unhealthy due to a file change 1884258 - Node network alerts should work on ratio rather than absolute values 1884270 - Git clone does not support SCP-style ssh locations 1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout 1884435 - vsphere - loopback is randomly not being added to resolver 1884565 - oauth-proxy crashes on invalid usage 1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy 1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users 1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment 1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu. 1884632 - Adding BYOK disk encryption through DES 1884654 - Utilization of a VMI is not populated 1884655 - KeyError on self._existing_vifs[port_id] 1884664 - Operator install page shows "installing..." instead of going to install status page 1884672 - Failed to inspect hardware. 
Reason: unable to start inspection: 'idrac' 1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure 1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps 1884739 - Node process segfaulted 1884824 - Update baremetal-operator libraries to k8s 1.19 1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping 1885138 - Wrong detection of pending state in VM details 1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2 1885165 - NoRunningOvnMaster alert falsely triggered 1885170 - Nil pointer when verifying images 1885173 - [e2e][automation] Add test for next run configuration feature 1885179 - oc image append fails on push (uploading a new layer) 1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig 1885218 - [e2e][automation] Add virtctl to gating script 1885223 - Sync with upstream (fix panicking cluster-capacity binary) 1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2 1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2 1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2 1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2 1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2 1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2 1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI 1885315 - unit tests fail on slow disks 1885319 - Remove redundant use of group and kind of DataVolumeTemplate 1885343 - Console doesn't load in iOS Safari when using self-signed certificates 1885344 - 4.7 upgrade - dummy bug for 1880591 1885358 - add p&f configuration to protect openshift traffic 1885365 - MCO does not respect the install section of systemd files when enabling 1885376 - failed to initialize the 
cluster: Cluster operator marketplace is still updating 1885398 - CSV with only Webhook conversion can't be installed 1885403 - Some OLM events hide the underlying errors 1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case 1885425 - opm index add cannot batch add multiple bundles that use skips 1885543 - node tuning operator builds and installs an unsigned RPM 1885644 - Panic output due to timeouts in openshift-apiserver 1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment 1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations 1885706 - Cypress: Fix 'link-name' accesibility violation 1885761 - DNS fails to resolve in some pods 1885856 - Missing registry v1 protocol usage metric on telemetry 1885864 - Stalld service crashed under the worker node 1885930 - [release 4.7] Collect ServiceAccount statistics 1885940 - kuryr/demo image ping not working 1886007 - upgrade test with service type load balancer will never work 1886022 - Move range allocations to CRD's 1886028 - [BM][IPI] Failed to delete node after scale down 1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas 1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd 1886154 - System roles are not present while trying to create new role binding through web console 1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm 1886168 - Remove Terminal Option for Windows Nodes 1886200 - greenwave / CVP is failing on bundle validations, cannot stage push 1886229 - Multipath support for RHCOS sysroot 1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage 1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status 1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL 
1886397 - Move object-enum to console-shared 1886423 - New Affinities don't contain ID until saving 1886435 - Azure UPI uses deprecated command 'group deployment' 1886449 - p&f: add configuration to protect oauth server traffic 1886452 - layout options doesn't gets selected style on click i.e grey background 1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected 1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest 1886524 - Change default terminal command for Windows Pods 1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution 1886600 - panic: assignment to entry in nil map 1886620 - Application behind service load balancer with PDB is not disrupted 1886627 - Kube-apiserver pods restarting/reinitializing periodically 1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider 1886636 - Panic in machine-config-operator 1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer. 1886751 - Gather MachineConfigPools 1886766 - PVC dropdown has 'Persistent Volume' Label 1886834 - ovn-cert is mandatory in both master and node daemonsets 1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState 1886861 - ordered-values.yaml not honored if values.schema.json provided 1886871 - Neutron ports created for hostNetworking pods 1886890 - Overwrite jenkins-agent-base imagestream 1886900 - Cluster-version operator fills logs with "Manifest: ..." 
spew 1886922 - [sig-network] pods should successfully create sandboxes by getting pod 1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console 1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO 1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded 1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster 1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6 1887046 - Event for LSO need update to avoid confusion 1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image 1887375 - User should be able to specify volumeMode when creating pvc from web-console 1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console 1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval 1887428 - oauth-apiserver service should be monitored by prometheus 1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False" 1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data 1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes 1887465 - Deleted project is still referenced 1887472 - unable to edit application group for KSVC via gestures (shift+Drag) 1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface 1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster 1887525 - Failures to set master HardwareDetails cannot easily be debugged 1887545 - 4.5 to 4.6 upgrade fails when external network is 
configured on a bond device: ovs-configuration service fails and node becomes unreachable 1887585 - ovn-masters stuck in crashloop after scale test 1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade. 1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator 1887740 - cannot install descheduler operator after uninstalling it 1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events 1887750 - oc explain localvolumediscovery returns empty description 1887751 - oc explain localvolumediscoveryresult returns empty description 1887778 - Add ContainerRuntimeConfig gatherer 1887783 - PVC upload cannot continue after approve the certificate 1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard 1887799 - User workload monitoring prometheus-config-reloader OOM 1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky 1887863 - Installer panics on invalid flavor 1887864 - Clean up dependencies to avoid invalid scan flagging 1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison 1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig 1888015 - workaround kubelet graceful termination of static pods bug 1888028 - prevent extra cycle in aggregated apiservers 1888036 - Operator details shows old CRD versions 1888041 - non-terminating pods are going from running to pending 1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect 1888073 - Operator controller continuously busy looping 1888118 - Memory requests not specified for image registry operator 1888150 - Install Operand Form on OperatorHub is displaying unformatted text 1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced 1888227 - Failed to 
deploy some of container image on the recent OCP 4.6 nightly build 1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5 1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt 1888363 - namespaces crash in dev 1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created 1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected 1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC 1888494 - imagepruner pod is error when image registry storage is not configured 1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree" 1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error 1888601 - The poddisruptionbudgets is using the operator service account, instead of gather 1888657 - oc doesn't know its name 1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable 1888671 - Document the Cloud Provider's ignore-volume-az setting 1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image 1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName() 1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set 1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster 1888866 - AggregatedAPIDown permanently firing after removing APIService 1888870 - JS error when using autocomplete in YAML editor 1888874 - hover message are not shown for some properties 1888900 - align plugins versions 1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation 1889213 - The error message of uploading failure is not clear enough 1889267 - Increase the time out for creating template and upload image in the terraform 1889348 - Project link should be 
removed from Application Details page, since it is inaccurate (Application Stages) 1889374 - Kiali feature won't work on fresh 4.6 cluster 1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode 1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade 1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information 1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance 1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown 1889577 - Resources are not shown on project workloads page 1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment 1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages 1889692 - Selected Capacity is showing wrong size 1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15 1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off 1889710 - Prometheus metrics on disk take more space compared to OCP 4.5 1889721 - opm index add semver-skippatch mode does not respect prerelease versions 1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab 1889767 - [vsphere] Remove certificate from upi-installer image 1889779 - error when destroying a vSphere installation that failed early 1889787 - OCP is flooding the oVirt engine with auth errors 1889838 - race in Operator update after fix from bz1888073 1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1 1889863 - Router prints incorrect log message for namespace label selector 1889891 - Backport timecache LRU fix 1889912 - Drains can cause high CPU usage 1889921 - Reported Degraded=False Available=False pair does not 
make sense 1889928 - [e2e][automation] Add more tests for golden os 1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName 1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings 1890074 - MCO extension kernel-headers is invalid 1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest 1890130 - multitenant mode consistently fails CI 1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e 1890145 - The mismatched of font size for Status Ready and Health Check secondary text 1890180 - FieldDependency x-descriptor doesn't support non-sibling fields 1890182 - DaemonSet with existing owner garbage collected 1890228 - AWS: destroy stuck on route53 hosted zone not found 1890235 - e2e: update Protractor's checkErrors logging 1890250 - workers may fail to join the cluster during an update from 4.5 1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member 1890270 - External IP doesn't work if the IP address is not assigned to a node 1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability 1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere 1890467 - unable to edit an application without a service 1890472 - [Kuryr] Bulk port creation exception not completely formatted 1890494 - Error assigning Egress IP on GCP 1890530 - cluster-policy-controller doesn't gracefully terminate 1890630 - [Kuryr] Available port count not correctly calculated for alerts 1890671 - [SA] verify-image-signature using service account does not work 1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest 1890808 - New etcd alerts need to be added to the monitoring stack 1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha. 
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config 1890995 - wew-app should provide more insight into why image deployment failed 1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call 1891047 - Helm chart fails to install using developer console because of TLS certificate error 1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler 1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI 1891108 - p&f: Increase the concurrency share of workload-low priority level 1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine) 1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown 1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart) 1891362 - Wrong metrics count for openshift_build_result_total 1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message 1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message 1891376 - Extra text in Cluster Utilization charts 1891419 - Wrong detail head on network policy detail page. 1891459 - Snapshot tests should report stderr of failed commands 1891498 - Other machine config pools do not show during update 1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage 1891551 - Clusterautoscaler doesn't scale up as expected 1891552 - Handle missing labels as empty. 
1891555 - The windows oc.exe binary does not have version metadata 1891559 - kuryr-cni cannot start new thread 1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11 1891625 - [Release 4.7] Mutable LoadBalancer Scope 1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml 1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails 1891740 - OperatorStatusChanged is noisy 1891758 - the authentication operator may spam DeploymentUpdated event endlessly 1891759 - Dockerfile builds cannot change /etc/pki/ca-trust 1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1 1891825 - Error message not very informative in case of mode mismatch 1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 1891951 - UI should show warning while creating pools with compression on 1891952 - [Release 4.7] Apps Domain Enhancement 1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace 1891995 - OperatorHub displaying old content 1891999 - Storage efficiency card showing wrong compression ratio 1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28' not found (required by ./opm) 1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. 1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator' 1892288 - assisted install workflow creates excessive control-plane disruption 1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config 1892358 - [e2e][automation] update feature gate for kubevirt-gating job 1892376 - Deleted netnamespace could not be re-created 1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky 1892393 - TestListPackages is flaky 1892448 - MCDPivotError alert/metric missing 1892457 - NTO-shipped stalld needs to use FIFO for boosting. 
1892467 - linuxptp-daemon crash 1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env 1892653 - User is unable to create KafkaSource with v1beta 1892724 - VFS added to the list of devices of the nodeptpdevice CRD 1892799 - Mounting additionalTrustBundle in the operator 1893117 - Maintenance mode on vSphere blocks installation. 1893351 - TLS secrets are not able to edit on console. 1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots 1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability 1893546 - Deploy using virtual media fails on node cleaning step 1893601 - overview filesystem utilization of OCP is showing the wrong values 1893645 - oc describe route SIGSEGV 1893648 - Ironic image building process is not compatible with UEFI secure boot 1893724 - OperatorHub generates incorrect RBAC 1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted 1893776 - No useful metrics for image pull time available, making debugging issues there impossible 1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator 1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD 1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS 1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped 1893944 - Wrong product name for Multicloud Object Gateway 1893953 - (release-4.7) Gather default StatefulSet configs 1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating" 1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser 1893972 - Should skip e2e test cases as early as possible 1894013 - [v2v][Testday] VMware to 
CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://' 1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective 1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set 1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used. 1894065 - tag new packages to enable TLS support 1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0 1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries 1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM 1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted 1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI) 1894216 - Improve OpenShift Web Console availability 1894275 - Fix CRO owners file to reflect node owner 1894278 - "database is locked" error when adding bundle to index image 1894330 - upgrade channels needs to be updated for 4.7 1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... 
no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient" 1894374 - Dont prevent the user from uploading a file with incorrect extension 1894432 - [oVirt] sometimes installer timeout on tmp_import_vm 1894477 - bash syntax error in nodeip-configuration.service 1894503 - add automated test for Polarion CNV-5045 1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform 1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets 1894645 - Cinder volume provisioning crashes on nil cloud provider 1894677 - image-pruner job is panicking: klog stack 1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0 1894860 - 'backend' CI job passing despite failing tests 1894910 - Update the node to use the real-time kernel fails 1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package 1895065 - Schema / Samples / Snippets Tabs are all selected at the same time 1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI 1895141 - panic in service-ca injector 1895147 - Remove memory limits on openshift-dns 1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation 1895268 - The bundleAPIs should NOT be empty 1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster 1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release" 1895360 - Machine Config Daemon removes a file although its defined in the dropin 1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. 
OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ...
does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses deprecated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp lose sync openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: `make check` broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display.
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 -
[RFE] Update jenkins-maven-agent to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by its name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config configmap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove &apos; from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - `oc adm release mirror` panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for
annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service account issuer
1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesn't set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node
pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format:
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite
leases
1904125 - Bootstrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights
status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE - Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalidates previously issued bound tokens
1905329 -
openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewall with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE gherkin design scenario-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion
to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-config.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick
starts not accessible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - 'Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidc discovery endpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overridden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat."
is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883
- Fix Pipleine creation without namespace issue 1907888 - Fix pipeline list page loader 1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form 1907892 - Unable to edit application deployed using "From Devfile" option 1907893 - navSortUtils.spec.ts unit test failure 1907896 - When a workload is added, Topology does not place the new items well 1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template 1907924 - Enable madvdontneed in OpenShift Images 1907929 - Enable madvdontneed in OpenShift System Components Part 2 1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot 1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context 1907948 - OCM-O bump to k8s 1.20 1907952 - bump to k8s 1.20 1907972 - Update OCM link to open Insights tab 1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI 1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916 1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni 1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk 1908035 - dynamic-demo-plugin build does not generate dist directory 1908135 - quick search modal is not centered over topology 1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled 1908159 - [AWS C2S] MCO fails to sync cloud config 1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384) 1908180 - Add source for template is stucking in preparing pvc 1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens 1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN 1908277 - QE - Automation- pipelines actions scripts 1908280 - Documentation 
describingignore-volume-azis incorrect 1908296 - Fix pipeline builder form yaml switcher validation issue 1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI 1908323 - Create button missing for PLR in the search page 1908342 - The new pv_collector_total_pv_count is not reported via telemetry 1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name 1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots 1908349 - Volume snapshot tests are failing after 1.20 rebase 1908353 - QE - Automation- pipelines runs scripts 1908361 - bump to k8s 1.20 1908367 - QE - Automation- pipelines triggers scripts 1908370 - QE - Automation- pipelines secrets scripts 1908375 - QE - Automation- pipelines workspaces scripts 1908381 - Go Dependency Fixes for Devfile Lib 1908389 - Loadbalancer Sync failing on Azure 1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived 1908407 - Backport Upstream 95269 to fix potential crash in kubelet 1908410 - Exclude Yarn from VSCode search 1908425 - Create Role Binding form subject type and name are undefined when All Project is selected 1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods 1908434 - Remove &apos from metal3-plugin internationalized strings 1908437 - Operator backed with no icon has no badge associated with the CSV tag 1908459 - bump to k8s 1.20 1908461 - Add bugzilla component to OWNERS file 1908462 - RHCOS 4.6 ostree removed dhclient 1908466 - CAPO AZ Screening/Validating 1908467 - Zoom in and zoom out in topology package should be sentence case 1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size 1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster 1908471 - OLM should bump k8s dependencies to 1.20 1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all 
manifests 1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1908545 - VM clone dialog does not open 1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard 1908562 - Pod readiness is not being observed in real world cases 1908565 - [4.6] Cannot filter the platform/arch of the index image 1908573 - Align the style of flavor 1908583 - bootstrap does not run on additional networks if configured for master in install-config 1908596 - Race condition on operator installation 1908598 - Persistent Dashboard shows events for all provisioners 1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state 1908648 - Skip TestKernelType test on OKD, adjust TestExtensions 1908650 - The title of customize wizard is inconsistent 1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator 1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s] 1908687 - Option to save user settings separate when using local bridge (affects console developers only) 1908697 - Showkubectl diff command in the oc diff help page 1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom 1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds 1908717 - "missing unit character in duration" error in some network dashboards 1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload 1908747 - stale S3 CredentialsRequest in CCO manifest 1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase 1908830 - RHCOS 4.6 - Missing Initiatorname 1908868 - Update empty state message for EventSources and Channels tab 1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite 
tolerations from tainted nodes 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1908888 - Dualstack does not work with multiple gateways 1908889 - Bump CNO to k8s 1.20 1908891 - TestDNSForwarding DNS operator e2e test is failing frequently 1908914 - CNO: upgrade nodes before masters 1908918 - Pipeline builder yaml view sidebar is not responsive 1908960 - QE - Design Gherkin Scenarios 1908971 - Gherkin Script for pipeline debt 4.7 1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated 1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console 1908998 - [cinder-csi-driver] doesn't detect the credentials change 1909004 - "No datapoints found" for RHEL node's filesystem graph 1909005 - i18n: workloads list view heading is not translated 1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects 1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type 1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware 1909067 - Web terminal should keep latest output when connection closes 1909070 - PLR and TR Logs component is not streaming as fast as tkn 1909092 - Error Message should not confuse user on Channel form 1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page 1909108 - Machine API components should use 1.20 dependencies 1909116 - Catalog Sort Items dropdown is not aligned on Firefox 1909198 - Move Sink action option is not working 1909207 - Accessibility Issue on monitoring page 1909236 - Remove pinned icon overlap on resource name 1909249 - Intermittent packet drop from pod to pod 1909276 - Accessibility Issue on create project modal 1909289 - oc debug of an init container no longer 
works 1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2 1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle 1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it 1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O 1909464 - Build operator-registry with golang-1.15 1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found 1909521 - Add kubevirt cluster type for e2e-test workflow 1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created 1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node 1909610 - Fix available capacity when no storage class selected 1909678 - scale up / down buttons available on pod details side panel 1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined 1909739 - Arbiter request data changes 1909744 - cluster-api-provider-openstack: Bump gophercloud 1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline 1909791 - Update standalone kube-proxy config for EndpointSlice 1909792 - Empty states for some details page subcomponents are not i18ned 1909815 - Perspective switcher is only half-i18ned 1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body 1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI 1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing 1909911 - [OVN]EgressFirewall caused a segfault 1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument 1909958 - Support Quick Start Highlights Properly 1909978 - 
ignore-volume-az = yes not working on standard storageClass 1909981 - Improve statement in template select step 1909992 - Fail to pull the bundle image when using the private index image 1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev 1910036 - QE - Design Gherkin Scenarios ODC-4504 1910049 - UPI: ansible-galaxy is not supported 1910127 - [UPI on oVirt]: Improve UPI Documentation 1910140 - fix the api dashboard with changes in upstream kube 1.20 1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable 1910165 - DHCP to static lease script doesn't handle multiple addresses 1910305 - [Descheduler] - The minKubeVersion should be 1.20.0 1910409 - Notification drawer is not localized for i18n 1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials 1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation 1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page 1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work 1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready 1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability 1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded 1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected" 1910753 - Support Directory Path to Devfile 1910805 - Missing translation for Pipeline status and breadcrumb text 1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer 1910840 - Show Nonexistent command info in theoc rollback -hhelp page 1910859 - breadcrumbs doesn't use last namespace 1910866 - Unify templates string 1910870 - Unify template 
dropdown action 1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6 1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads" 1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard 1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration" 1911213 - Wrong and misleading warning for VMs that were created manually (not from template) 1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created 1911269 - waiting for the build message present when build exists 1911280 - Builder images are not detected for Dotnet, Httpd, NGINX 1911307 - Pod Scale-up requires extra privileges in OpenShift web-console 1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template 1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error 1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template 1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation 1911418 - [v2v] The target storage class name is not displayed if default storage class is used 1911434 - git ops empty state page displays icon with watermark 1911443 - SSH Cretifiaction field should be validated 1911465 - IOPS display wrong unit 1911474 - Devfile Application Group Does Not Delete Cleanly (errors) 1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController 1911574 - Expose volume mode on Upload Data form 1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined 1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel 1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command 
output said 'Failed to run bundle'' 1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state 1911782 - Descheduler should not evict pod used local storage by the PVC 1911796 - uploading flow being displayed before submitting the form 1912066 - The ansible type operator's manager container is not stable when managing the CR 1912077 - helm operator's default rbac forbidden 1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory' 1912237 - Rebase CSI sidecars for 4.7 1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page 1912409 - Fix flow schema deployment 1912434 - Update guided tour modal title 1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken 1912523 - Standalone pod status not updating in topology graph 1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion 1912558 - TaskRun list and detail screen doesn't show Pending status 1912563 - p&f: carry 97206: clean up executing request on panic 1912565 - OLM macOS local build broken by moby/term dependency 1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion 1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff 1912590 - publicImageRepository not being populated 1912640 - Go operator's controller pods is forbidden 1912701 - Handle dual-stack configuration for NIC IP 1912703 - multiple queries can't be plotted in the same graph under some conditons 1912730 - Operator backed: In-context should support visual connector if SBO is not installed 1912828 - Align High Performance VMs with High Performance in RHV-UI 1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates 1912852 - VM from wizard - available VM templates - "storage" field is "0 B" 
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URLs
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator-generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic does not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early, before it has actually succeeded
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if vxlanPort is customized
1913249 - update info alert "this template is not editable"
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged in with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration] SDN to OVN migration gets stuck on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after changing time range in graph and changing back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pick up upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE - To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crashes when deleting ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when creating localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in "from dockerfile" flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk source
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page have distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - It's not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract output wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChromeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not created during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7 + OCS 4.6] Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation] fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent scheduling Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text
1915643 - OCP 4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent white screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Affinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled
1915674 - Golden image PVC creation - storage size should be taken from the template
1915685 - Message for not supported template is not clear enough
1915760 - Need to increase timeout to wait for rhel worker to get ready
1915793 - quick starts panel syncs incorrectly across browser windows
1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster
1915818 - vsphere-problem-detector: use "_totals" in metrics
1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol
1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version
1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0
1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics
1915885 - Kuryr doesn't support workers running on multiple subnets
1915898 - TaskRun log output shows "undefined" in streaming
1915907 - test/cmd/builds.sh uses docker.io
1915912 - sig-storage-csi-snapshotter image not available
1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard
1915939 - Resizing the browser window removes Web Terminal Icon
1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7
1915962 - ROKS: manifest with machine health check fails to apply in 4.7
1915972 - Global configuration breadcrumbs do not work as expected
1915981 - Install ethtool and conntrack in container for debugging
1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception
1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups
1916021 - OLM enters infinite loop if Pending CSV replaces itself
1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry
1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations
1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk
1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration
1916145 - Explicitly set minimum versions of python libraries
1916164 - Update csi-driver-nfs builder & base images to be consistent with ART
1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7
1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third
1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2
1916379 - error metrics from vsphere-problem-detector should be gauge
1916382 - Can't create ext4 filesystems with Ignition
1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates
1916401 - Deleting an ingress controller with a bad DNS Record hangs
1916417 - [Kuryr] Must-gather does not have all Custom Resources information
1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image
1916454 - teach CCO about upgradeability from 4.6 to 4.7
1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documentation
1916502 - Boot disk mirroring fails with mdadm error
1916524 - Two rootdisks show on storage step
1916580 - Default yaml is broken for VM and VM template
1916621 - oc adm node-logs examples are wrong
1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO: wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when "Device Type" is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fail to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk - csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlap monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when creating localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message may be wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job gets error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - "NodeTextFileCollectorScrapeError" alert in OCP 4.6 cluster
1917759 - Console operator panics after setting plugin that does not exist to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather a list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP 1918153 - When "&" character is set as an environment variable in a build config it is getting converted as "\u0026" 1918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connector's are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration (SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period 
1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - 
OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese translate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information for oc new-project --skip-config-write is wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated. 
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is Disk Mode 1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to update oc login help page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off. 
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective 
and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 1921458 - [SDK] Gracefully handle the run bundle-upgrade if the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't come up after deploying with Vault details from UI 1921572 - For external source (i.e GitHub Source) form view as well shows yaml 1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation] 
Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 
1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources shows undefined in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprectaed templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image shoult not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 
https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8 4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4 mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk 4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6 McvuEaxco7U= =sw8i -----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918990 - ComplianceSuite scans use quay content image for initContainer 1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present 1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules 1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902251 - The compliancesuite object returns error with ocp4-cis tailored profile 1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object 1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object 1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator 1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" 1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

  1. Description:

WebKitGTK+ is a port of the WebKit portable web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
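After applying the update (e.g. with yum on RHEL 7), it can be useful to confirm the installed webkitgtk4 build is at or above the fixed release from this advisory (2.28.2-2.el7). The following is a rough sketch, not part of the advisory: the `installed` value is hard-coded for illustration and would in practice come from `rpm -q --qf '%{VERSION}-%{RELEASE}' webkitgtk4`, and `sort -V` is used as a simple version comparator.

```shell
# Fixed version-release from this advisory (RHEL 7 packages):
fixed="2.28.2-2.el7"

# Hypothetical installed value; on a real system query it with:
#   rpm -q --qf '%{VERSION}-%{RELEASE}' webkitgtk4
installed="2.28.2-2.el7"

# sort -V orders version strings; if the lowest of the two is the fixed
# version, the installed build is at or above it.
lowest=$(printf '%s\n%s\n' "$fixed" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$fixed" ]; then
  echo "webkitgtk4 is at or above the fixed version $fixed"
else
  echo "webkitgtk4 is older than $fixed; apply the update, e.g.: yum update webkitgtk4"
fi
```

The same comparison works for the other architectures listed below, since the version-release string is identical across them.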

  1. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

ppc64: webkitgtk4-2.28.2-2.el7.ppc.rpm webkitgtk4-2.28.2-2.el7.ppc64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le: webkitgtk4-2.28.2-2.el7.ppc64le.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x: webkitgtk4-2.28.2-2.el7.s390.rpm webkitgtk4-2.28.2-2.el7.s390x.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64: webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x: webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-devel-2.28.2-2.el7.s390.rpm webkitgtk4-devel-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2020-1-29-2 iCloud for Windows 10.9.2

iCloud for Windows 10.9.2 is now available and addresses the following:

ImageIO Available for: Windows 10 and later via the Microsoft Store Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2020-3826: Samuel Groß of Google Project Zero

libxml2 Available for: Windows 10 and later via the Microsoft Store Impact: Processing maliciously crafted XML may lead to an unexpected application termination or arbitrary code execution Description: A buffer overflow was addressed with improved size validation. CVE-2020-3865: Ryan Pickren (ryanpickren.com)

Installation note:

iCloud for Windows 10.9.2 may be obtained from: https://support.apple.com/HT204283

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl4x9jMACgkQBz4uGe3y
0M1Apw/+PrQvBheHkxIo2XjPOyTxO+M8mlaU+6gY7Ue14zivPO20JqRLb34FyNfh
iE+RSJ3NB/0cdZIUH1xcrKzK+tmVFVETJaBmLmoTHBy3946DQtUvditLfTHYnYzC
peJbdG4UyevVwf/AoED5iI89lf/ADOWm9Xu0LVtvDKyTAFewQp9oOlG731twL9iI
6ojuzYokYzJSWcDlLMTFB4sDpZsNEz2Crf+WZ44r5bHKcSTi7HzS+OPueQ6dSdqi
Y9ioDv/SB0dnLJZE2wq6eaFL2t7eXelYUSL7SekXI4aYQkhaOQFabutFuYNoOX4e
+ctnbSdVT5WjG7tyg9L7bl4m1q8GgH43OLBmA1Z/gps004PHMQ87cRRjvGGKIQOf
YMI0VBqFc6cAnDYh4Oun31gbg9Y1llYYwTQex7gjx9U+v3FKOaxWxQg8N9y4d2+v
qsStr7HKVKcRE/LyEx4fA44VoKNHyHZ4LtQSeX998MTapyH5XbbHEWr/K4TcJ8ij
6Zv/GkUKeINDJbRFhiMJtGThTw5dba5sfHfVv88NrbNYcwwVQrvlkfMq8Jrn0YEf
rahjCDLigXXbyaqxM57feJ9+y6jHpULeywomGv+QEzyALTdGKIaq7w1pwLdOHizi
Lcxvr8FxmUxydrvFJSUDRa9ELigIsLmgPB3l1UiUmd3AQ38ymJw=
=tRpr
-----END PGP SIGNATURE-----

Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 2007379 - Events are not generated for master offset for ordinary clock 2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace 2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address 2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error 2007522 - No new local-storage-operator-metadata-container is build for 4.10 2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10 2007580 - Azure cilium installs are failing e2e tests 2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10 2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes 2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures 2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow 2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates 2007802 - AWS machine actuator get stuck if machine is completely missing 2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator 2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process 2008151 - Topology breaks on clicking in empty state 2008185 - Console operator go.mod should use go 1.16.version 2008201 - openstack-az job is failing on haproxy idle test 2008207 - vsphere CSI driver doesn't set resource limits 2008223 - gather_audit_logs: fix oc command line to get the current audit profile 2008235 - The Save button in the Edit DC form remains disabled 2008256 - Update Internationalization README with scope info 2008321 - Add correct documentation link for MON_DISK_LOW 2008462 - Disable PodSecurity feature gate for 4.10 2008490 - 
Backing store details page does not contain all the kebab actions. 2010181 - Environment variables not getting reset on reload on deployment edit form 2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2010341 - OpenShift Alerting Rules Style-Guide Compliance 2010342 - Local console builds can have out of memory errors 2010345 - OpenShift Alerting Rules Style-Guide Compliance 2010348 - Reverts PIE build mode for K8S components 2010352 - OpenShift Alerting Rules Style-Guide Compliance 2010354 - OpenShift Alerting Rules Style-Guide Compliance 2010359 - OpenShift Alerting Rules Style-Guide Compliance 2010368 - OpenShift Alerting Rules Style-Guide Compliance 2010376 - OpenShift Alerting Rules Style-Guide Compliance 2010662 - Cluster is unhealthy after image-registry-operator tests 2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent) 2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API 2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address 2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing 2010864 - Failure building EFS operator 2010910 - ptp worker events unable to identify interface for multiple interfaces 2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24 2010921 - Azure Stack Hub does not handle additionalTrustBundle 2010931 - SRO CSV uses non default category "Drivers and plugins" 2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
2011038 - optional operator conditions are confusing 2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass 2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's 2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image 2011368 - Tooltip in pipeline visualization shows misleading data 2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels 2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards 2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster 2011513 - Kubelet rejects pods that use resources that should be freed by completed pods 2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine" 2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented 2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore 2011733 - Repository README points to broken documentarion link 2011753 - Ironic resumes clean before raid configuration job is actually completed 2011809 - The nodes page in the openshift console doesn't work. 
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing 
in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator 
package cic-operator FOREIGN KEY constraint failed. 2022447 - ServiceAccount in manifests conflicts with OLM 2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 2025821 - Make "Network Attachment Definitions" available to regular user 2025823 - The console nav bar ignores plugin separator in existing sections 2025830 - CentOS capitalizaion is wrong 2025837 - Warn users that the RHEL URL expire 2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-* 2025903 - [UI] RoleBindings tab doesn't show correct rolebindings 2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2026178 - OpenShift Alerting Rules Style-Guide Compliance 2026209 - Updation of task is getting failed (tekton hub integration) 2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io" 2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates 2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct 2026352 - Kube-Scheduler revision-pruner fail during install of new cluster 2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment 2026383 - Error when rendering custom Grafana dashboard through ConfigMap 2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation 2026396 - Cachito Issues: sriov-network-operator Image build failure 2026488 - openshift-controller-manager - delete event is repeating pathologically 2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists 2039382 - gather_metallb_logs does not have execution permission 2039406 - logout from rest session after vsphere operator sync is finished 2039408 - Add GCP region northamerica-northeast2 to allowed regions 2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration 2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment 2039491 - oc - git:// protocol used in unit tests 2039516 - Bump OVN to ovn21.12-21.12.0-25 2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate 2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled 2039541 - Resolv-prepender script duplicating entries 2039586 - [e2e] update centos8 to centos stream8 2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty 2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3' 2039670 - Create PDBs for control plane components 2039678 - Page goes blank when create image pull secret 2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported 2039743 - React missing key warning when open operator hub detail page (and maybe others as well) 2039756 - React missing key warning when open KnativeServing details 2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab 2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard 2039781 - [GSS] OBC is not visible by admin of a Project on Console 2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector 2039868 - Insights Advisor widget is not in the disabled state when the Insights 
Operator is disabled 2039880 - Log level too low for control plane metrics 2039919 - Add E2E test for router compression feature 2039981 - ZTP for standard clusters installs stalld on master nodes 2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.

------------------------------------------------------------------------
WebKitGTK and WPE WebKit Security Advisory                 WSA-2020-0002


Date reported           : February 14, 2020
Advisory ID             : WSA-2020-0002
WebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2020-0002.html
WPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2020-0002.html
CVE identifiers         : CVE-2020-3862, CVE-2020-3864, CVE-2020-3865,
                          CVE-2020-3867, CVE-2020-3868.

Several vulnerabilities were discovered in WebKitGTK and WPE WebKit.

CVE-2020-3862
    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4.
    Credit to Srikanth Gatta of Google Chrome.

CVE-2020-3864
    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4.
    Credit to Ryan Pickren (ryanpickren.com).

CVE-2020-3865
    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4.
    Credit to Ryan Pickren (ryanpickren.com).

CVE-2020-3867
    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4.
    Credit to an anonymous researcher.

CVE-2020-3868
    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before 2.26.4.
    Credit to Marcin Towalski of Cisco Talos.

We recommend updating to the latest stable versions of WebKitGTK and WPE WebKit. It is the best way to ensure that you are running safe versions of WebKit. Please check our websites for information about the latest stable releases.

Further information about WebKitGTK and WPE WebKit security advisories can be found at: https://webkitgtk.org/security.html or https://wpewebkit.org/security/.

The WebKitGTK and WPE WebKit team, February 14, 2020
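Every CVE in the advisory above affects WebKitGTK and WPE WebKit releases before 2.26.4. As a minimal sketch of the version check the advisory implies (the example version strings are illustrative assumptions, and the installed version would in practice come from something like `pkg-config --modversion webkit2gtk-4.0`):

```python
# Check whether an installed WebKitGTK/WPE WebKit version predates the
# fixed release (2.26.4) named in advisory WSA-2020-0002.

FIXED = (2, 26, 4)  # first release containing the fixes

def parse_version(s: str) -> tuple:
    """Parse a dotted version string like '2.26.3' into a tuple of ints."""
    return tuple(int(part) for part in s.split("."))

def is_vulnerable(installed: str, fixed: tuple = FIXED) -> bool:
    """True if the installed version is older than the fixed release."""
    return parse_version(installed) < fixed

print(is_vulnerable("2.26.3"))  # True: predates 2.26.4
print(is_vulnerable("2.26.4"))  # False: contains the fixes
```

Tuple comparison handles the dotted components lexicographically, so 2.9.x correctly sorts below 2.26.x without any string tricks.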


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202002-1480",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.5"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.4"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.17"
      },
      {
        "model": "leap",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "opensuse",
        "version": "15.1"
      },
      {
        "model": "icloud",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.8"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2020-3868",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "COMPLETE",
            "baseScore": 9.3,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-3868",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "COMPLETE",
            "baseScore": 9.3,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 8.6,
            "id": "VHN-181993",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2020-3868",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-3868",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202001-1451",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-181993",
            "trust": 0.1,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-3868",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.3.1 and iPadOS 13.3.1, tvOS 13.3.1, Safari 13.0.5, iTunes for Windows 12.10.4, iCloud for Windows 11.0, iCloud for Windows 7.17. Processing maliciously crafted web content may lead to arbitrary code execution. Apple iOS, etc. are all products of Apple (Apple). Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. Apple iPadOS is an operating system for iPad tablets. A buffer error vulnerability exists in the WebKit component of several Apple products. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. 
This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:5633-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:5633\nIssue date:        2021-02-24\nCVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 \n                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 \n                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 \n                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 \n                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 \n                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 \n                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 \n                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 \n                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 \n                   CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 \n                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 \n                   CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 \n                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 \n                   CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 \n                   CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 \n                   CVE-2019-16935 
CVE-2019-17450 CVE-2019-17546 \n                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 \n                   CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 \n                   CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 \n                   CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 \n                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 \n                   CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 \n                   CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 \n                   CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 \n                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 \n                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 \n                   CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 \n                   CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 \n                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 \n                   CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 \n                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 \n                   CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 \n                   CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 \n                   CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 \n                   CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 \n                   CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 \n                   CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 \n                   CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 \n                   CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 \n                   CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 \n                   CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 \n                   CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 \n                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 \n                   CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 \n                   CVE-2020-10763 
CVE-2020-10773 CVE-2020-10774 \n                   CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 \n                   CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 \n                   CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 \n                   CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 \n                   CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 \n                   CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 \n                   CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 \n                   CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 \n                   CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 \n                   CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 \n                   CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 \n                   CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 \n                   CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 \n                   CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 \n                   CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 \n                   CVE-2021-2007 CVE-2021-3121 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.7.0 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2020:5634\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-x86_64\n\nThe image digest is\nsha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-s390x\n\nThe image digest is\nsha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le\n\nThe image digest is\nsha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. 
\n\nSecurity Fix(es):\n\n* crewjam/saml: authentication bypass in saml authentication\n(CVE-2020-27846)\n\n* golang: crypto/ssh: crafted authentication request can lead to nil\npointer dereference (CVE-2020-29652)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* kubernetes: Secret leaks in kube-controller-manager when using vSphere\nProvider (CVE-2020-8563)\n\n* containernetworking/plugins: IPv6 router advertisements allow for MitM\nattacks on IPv4 clusters (CVE-2020-10749)\n\n* heketi: gluster-block volume password details available in logs\n(CVE-2020-10763)\n\n* golang.org/x/text: possibility to trigger an infinite loop in\nencoding/unicode could lead to crash (CVE-2020-14040)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* golang-github-gorilla-websocket: integer overflow leads to denial of\nservice (CVE-2020-27813)\n\n* golang: math/big: panic during recursive division of very large numbers\n(CVE-2020-28362)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor OpenShift Container Platform 4.7, see the following documentation,\nwhich\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html. \n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1620608 - Restoring deployment config with history leads to weird state\n1752220 - [OVN] Network Policy fails to work when project label gets overwritten\n1756096 - Local storage operator should implement must-gather spec\n1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n1768255 - installer reports 100% complete but failing components\n1770017 - Init containers restart when the exited container is removed from node. \n1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating\n1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset\n1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale\n1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands\n1784298 - \"Displaying with reduced resolution due to large dataset.\" would show under some conditions\n1785399 - Under condition of heavy pod creation, creation fails with \u0027error reserving pod name ...: name is reserved\"\n1797766 - Resource Requirements\" specDescriptor fields - CPU and Memory injects empty string YAML editor\n1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - 
olmFull suite always fails once th suite is run on the same cluster\n1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz\n1837953 - Replacing masters doesn\u0027t work for ovn-kubernetes 4.4\n1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks\n1838751 - [oVirt][Tracker] Re-enable skipped network tests\n1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups\n1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed\n1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP\n1841119 - Get rid of config patches and pass flags directly to kcm\n1841175 - When an Install Plan gets deleted, OLM does not create a new one\n1841381 - Issue with memoryMB validation\n1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option\n1844727 - Etcd container leaves grep and lsof zombie processes\n1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs\n1847074 - Filter bar layout issues at some screen widths on search page\n1848358 - CRDs with preserveUnknownFields:true don\u0027t reflect in status that they are non-structural\n1849543 - [4.5]kubeletconfig\u0027s description will show multiple lines for finalizers when upgrade from 4.4.8-\u003e4.5\n1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service\n1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard\n1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing\n1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD\n1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service\n1853115 - the restriction 
of --cloud option should be shown in help text. \n1853116 - `--to` option does not work with `--credentials-requests` flag. \n1853352 - [v2v][UI] Storage Class fields Should  Not be empty  in VM  disks view\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854567 - \"Installed Operators\" list showing \"duplicated\" entries during installation\n1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present\n1855351 - Inconsistent Installer reactions to Ctrl-C during user input process\n1855408 - OVN cluster unstable after running minimal scale test\n1856351 - Build page should show metrics for when the build ran, not the last 30 minutes\n1856354 - New APIServices missing from OpenAPI definitions\n1857446 - ARO/Azure: excessive pod memory allocation causes node lockup\n1857877 - Operator upgrades can delete existing CSV before completion\n1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed\n1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created\n1860136 - default ingress does not propagate annotations to route object on update\n1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as \"Failed\"\n1860518 - unable to stop a crio pod\n1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller\n1862430 - LSO: PV creation lock should not be acquired in a loop\n1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there's no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why "No persistent storage alerts" was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on  , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on  250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: "?" button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\\"/mount-point\\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - `oc api-resources` does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error :  the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar  when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress:  Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress:  Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with "no fc disk found" error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6:  Topology Manager OpenShift E2E test fails:  gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - `oc explain localvolumediscovery` returns empty description
1887751 - `oc explain localvolumediscoveryresult` returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress:  Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into  install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import 
wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca 
injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest\n1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded\n1896244 - Found a panic in storage e2e test\n1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general\n1896302 - [e2e][automation] Fix 4.6 test failures\n1896365 - [Migration]The SDN migration cannot revert under some conditions\n1896384 - [ovirt IPI]: local coredns resolution not working\n1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6\n1896529 - Incorrect instructions in the Serverless operator and application quick starts\n1896645 - documentationBaseURL needs to be updated for 4.7\n1896697 - [Descheduler] policy.yaml param in cluster configmap is empty\n1896704 - Machine API components should honour cluster wide proxy settings\n1896732 - \"Attach to Virtual Machine OS\" button should not be visible on old clusters\n1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection  is incompatible with SR-IOV operator\n1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails\n1896918 - start creating new-style Secrets for AWS\n1896923 - DNS pod /metrics exposed on anonymous http port\n1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... 
must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\"  \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe  Translation of  \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
\n1898407 - [Deployment timing regression] Deployment takes longer with 4.7\n1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service\n1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine\n1898500 - Failure to upgrade operator when a Service is included in a Bundle\n1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic\n1898532 - Display names defined in specDescriptors not respected\n1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted\n1898613 - Whereabouts should exclude IPv6 ranges\n1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase\n1898679 - Operand creation form - Required \"type: object\" properties (Accordion component) are missing red asterisk\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator\n1898839 - Wrong YAML in operator metadata\n1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job\n1898873 - Remove TechPreview Badge from Monitoring\n1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way\n1899111 - [RFE] Update jenkins-maven-agen to maven36\n1899128 - VMI details screen -\u003e show the warning that it is preferable to have a VM only if the VM actually does not exist\n1899175 - bump the RHCOS boot images for 4.7\n1899198 - Use new packages for ipa ramdisks\n1899200 - In Installed Operators page I cannot search for an Operator by it\u0027s name\n1899220 - Support AWS IMDSv2\n1899350 - configure-ovs.sh doesn\u0027t configure bonding options\n1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error \"An error occurred Not Found\"\n1899459 - 
Failed to start monitoring pods once the operator removed from override list of CVO\n1899515 - Passthrough credentials are not immediately re-distributed on update\n1899575 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899582 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899588 - Operator objects are re-created after all other associated resources have been deleted\n1899600 - Increased etcd fsync latency as of OCP 4.6\n1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup\n1899627 - Project dashboard Active status using small icon\n1899725 - Pods table does not wrap well with quick start sidebar open\n1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)\n1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality\n1899835 - catalog-operator repeatedly crashes with \"runtime error: index out of range [0] with length 0\"\n1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap\n1899853 - additionalSecurityGroupIDs not working for master nodes\n1899922 - NP changes sometimes influence new pods. 
\n1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet\n1900008 - Fix internationalized sentence fragments in ImageSearch.tsx\n1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx\n1900020 - Remove \u0026apos; from internationalized keys\n1900022 - Search Page - Top labels field is not applied to selected Pipeline resources\n1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently\n1900126 - Creating a VM results in suggestion to create a default storage class when one already exists\n1900138 - [OCP on RHV] Remove insecure mode from the installer\n1900196 - stalld is not restarted after crash\n1900239 - Skip \"subPath should be able to unmount\" NFS test\n1900322 - metal3 pod\u0027s toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists\n1900377 - [e2e][automation] create new css selector for active users\n1900496 - (release-4.7) Collect spec config for clusteroperator resources\n1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks\n1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue\n1900759 - include qemu-guest-agent by default\n1900790 - Track all resource counts via telemetry\n1900835 - Multus errors when cachefile is not found\n1900935 - `oc adm release mirror` panic panic: runtime error\n1900989 - accessing the route cannot wake up the idled resources\n1901040 - When scaling down the status of the node is stuck on deleting\n1901057 - authentication operator health check failed when installing a cluster behind proxy\n1901107 - pod donut shows incorrect information\n1901111 - Installer dependencies are broken\n1901200 - linuxptp-daemon crash when enable debug log level\n1901301 - CBO should handle platform=BM without provisioning CR\n1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly\n1901363 - High Podready 
Latency due to timed out waiting for annotations\n1901373 - redundant bracket on snapshot restore button\n1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with \"timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true\"\n1901395 - \"Edit virtual machine template\" action link should be removed\n1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting\n1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP\n1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema\n1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod \"before all\" hook for \"creates the resource instance\"\n1901604 - CNO blocks editing Kuryr options\n1901675 - [sig-network] multicast when using one of the plugins \u0027redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy\u0027 should allow multicast traffic in namespaces where it is enabled\n1901909 - The device plugin pods / cni pod are restarted every 5 minutes\n1901982 - [sig-builds][Feature:Builds] build can reference a cluster service  with a build being created from new-build should be able to run a build that references a cluster service\n1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error\n1902059 - Wire a real signer for service accout issuer\n1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902157 - The DaemonSet machine-api-termination-handler couldn\u0027t allocate Pod\n1902253 - MHC status doesnt set RemediationsAllowed = 0\n1902299 - Failed to mirror operator catalog - error: destination registry required\n1902545 - Cinder csi 
driver node pod should add nodeSelector for Linux\n1902546 - Cinder csi driver node pod doesn\u0027t run on master node\n1902547 - Cinder csi driver controller pod doesn\u0027t run on master node\n1902552 - Cinder csi driver does not use the downstream images\n1902595 - Project workloads list view doesn\u0027t show alert icon and hover message\n1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent\n1902601 - Cinder csi driver pods run as BestEffort qosClass\n1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group\n1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails\n1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked\n1902824 - failed to generate semver informed package manifest: unable to determine default channel\n1902894 - hybrid-overlay-node crashing trying to get node object during initialization\n1902969 - Cannot load vmi detail page\n1902981 - It should default to current namespace when create vm from template\n1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file  via s3:// URI\n1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry\n1903034 - OLM continuously printing debug logs\n1903062 - [Cinder csi driver] Deployment mounted volume have no write access\n1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready\n1903107 - Enable vsphere-problem-detector e2e tests\n1903164 - OpenShift YAML editor jumps to top every few seconds\n1903165 - Improve Canary Status Condition handling for e2e tests\n1903172 - Column Management: Fix sticky footer on scroll\n1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled\n1903188 - [Descheduler] cluster log reports failed to 
validate server configuration\" err=\"unsupported log format:\n1903192 - Role name missing on create role binding form\n1903196 - Popover positioning is misaligned for Overview Dashboard status items\n1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. \n1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components\n1903248 - Backport Upstream Static Pod UID patch\n1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]\n1903290 - Kubelet repeatedly log the same log line from exited containers\n1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. \n1903382 - Panic when task-graph is canceled with a TaskNode with no tasks\n1903400 - Migrate a VM which is not running goes to pending state\n1903402 - Nic/Disk on VMI overview should link to VMI\u0027s nic/disk page\n1903414 - NodePort is not working when configuring an egress IP address\n1903424 - mapi_machine_phase_transition_seconds_sum doesn\u0027t work\n1903464 - \"Evaluating rule failed\" for \"record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\" and \"record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"\n1903639 - Hostsubnet gatherer produces wrong output\n1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service\n1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started\n1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image\n1903717 - Handle different Pod selectors for metal3 Deployment\n1903733 - Scale up followed by scale down can delete all running workers\n1903917 - Failed to load \"Developer Catalog\" page\n1903999 - Httplog response code is always zero\n1904026 - The quota controllers should resync on new 
resources and make progress\n1904064 - Automated cleaning is disabled by default\n1904124 - DHCP to static lease script doesn\u0027t work correctly if starting with infinite leases\n1904125 - Boostrap VM .ign image gets added into \u0027default\u0027 pool instead of \u003ccluster-name\u003e-\u003cid\u003e-bootstrap\n1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails\n1904133 - KubeletConfig flooded with failure conditions\n1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart\n1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !\n1904244 - MissingKey errors for two plugins using i18next.t\n1904262 - clusterresourceoverride-operator has version: 1.0.0 every build\n1904296 - VPA-operator has version: 1.0.0 every build\n1904297 - The index image generated by \"opm index prune\" leaves unrelated images\n1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards\n1904385 - [oVirt] registry cannot mount volume on 4.6.4 -\u003e 4.6.6 upgrade\n1904497 - vsphere-problem-detector: Run on vSphere cloud only\n1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set\n1904502 - vsphere-problem-detector: allow longer timeouts for some operations\n1904503 - vsphere-problem-detector: emit alerts\n1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)\n1904578 - metric scraping for vsphere problem detector is not configured\n1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -\u003e 4.6.6 upgrade\n1904663 - IPI pointer customization MachineConfig always generated\n1904679 - [Feature:ImageInfo] Image info should display information about images\n1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image\n1904684 - [sig-cli] oc debug 
ensure it works with image streams\n1904713 - Helm charts with kubeVersion restriction are filtered incorrectly\n1904776 - Snapshot modal alert is not pluralized\n1904824 - Set vSphere hostname from guestinfo before NM starts\n1904941 - Insights status is always showing a loading icon\n1904973 - KeyError: \u0027nodeName\u0027 on NP deletion\n1904985 - Prometheus and thanos sidecar targets are down\n1904993 - Many ampersand special characters are found in strings\n1905066 - QE - Monitoring test cases - smoke test suite automation\n1905074 - QE -Gherkin linter to maintain standards\n1905100 - Too many haproxy processes in default-router pod causing high load average\n1905104 - Snapshot modal disk items missing keys\n1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm\n1905119 - Race in AWS EBS determining whether custom CA bundle is used\n1905128 - [e2e][automation] e2e tests succeed without actually execute\n1905133 - operator conditions special-resource-operator\n1905141 - vsphere-problem-detector: report metrics through telemetry\n1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures\n1905194 - Detecting broken connections to the Kube API takes up to 15 minutes\n1905221 - CVO transitions from \"Initializing\" to \"Updating\" despite not attempting many manifests\n1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP\n1905253 - Inaccurate text at bottom of Events page\n1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905299 - OLM fails to update operator\n1905307 - Provisioning CR is missing from must-gather\n1905319 - cluster-samples-operator containers are not requesting required memory resource\n1905320 - csi-snapshot-webhook is not requesting required memory resource\n1905323 - dns-operator is not requesting required memory resource\n1905324 - 
ingress-operator is not requesting required memory resource\n1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory\n1905328 - Changing the bound token service account issuer invalids previously issued bound tokens\n1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory\n1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory\n1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails\n1905347 - QE - Design Gherkin Scenarios\n1905348 - QE - Design Gherkin Scenarios\n1905362 - [sriov] Error message \u0027Fail to update DaemonSet\u0027 always shown in sriov operator pod\n1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted\n1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input\n1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation\n1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1\n1905404 - The example of \"Remove the entrypoint on the mysql:latest image\" for `oc image append` does not work\n1905416 - Hyperlink not working from Operator Description\n1905430 - usbguard extension fails to install because of missing correct protobuf dependency version\n1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads\n1905502 - Test flake - unable to get https transport for ephemeral-registry\n1905542 - [GSS] The \"External\" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6. 
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the `oc rollback -h` help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Cretifiaction field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle'
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditons
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not aditable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in `from dockerfile` flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk souce
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not create during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation]fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
\n1915674 - Golden image PVC creation - storage size should be taken from the template\n1915685 - Message for not supported template is not clear enough\n1915760 - Need to increase timeout to wait rhel worker get ready\n1915793 - quick starts panel syncs incorrectly across browser windows\n1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster\n1915818 - vsphere-problem-detector: use \"_totals\" in metrics\n1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol\n1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version\n1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0\n1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics\n1915885 - Kuryr doesn\u0027t support workers running on multiple subnets\n1915898 - TaskRun log output shows \"undefined\" in streaming\n1915907 - test/cmd/builds.sh uses docker.io\n1915912 - sig-storage-csi-snapshotter image not available\n1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard\n1915939 - Resizing the browser window removes Web Terminal Icon\n1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]\n1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7\n1915962 - ROKS: manifest with machine health check fails to apply in 4.7\n1915972 - Global configuration breadcrumbs do not work as expected\n1915981 - Install ethtool and conntrack in container for debugging\n1915995 - \"Edit RoleBinding Subject\" action under RoleBinding list page kebab actions causes unhandled exception\n1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups\n1916021 - OLM enters infinite loop if Pending CSV 
replaces itself\n1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry\n1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert\u0027s annotations\n1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk\n1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration\n1916145 - Explicitly set minimum versions of python libraries\n1916164 - Update csi-driver-nfs builder \u0026 base images to be consistent with ART\n1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7\n1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third\n1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2\n1916379 - error metrics from vsphere-problem-detector should be gauge\n1916382 - Can\u0027t create ext4 filesystems with Ignition\n1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving \u0027verified: false\u0027 even for verified updates\n1916401 - Deleting an ingress controller with a bad DNS Record hangs\n1916417 - [Kuryr] Must-gather does not have all Custom Resources information\n1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image\n1916454 - teach CCO about upgradeability from 4.6 to 4.7\n1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation\n1916502 - Boot disk mirroring fails with mdadm error\n1916524 - Two rootdisk shows on storage step\n1916580 - Default yaml is broken for VM and VM template\n1916621 - oc adm node-logs examples are wrong\n1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
\n1916692 - Possibly fails to destroy LB and thus cluster\n1916711 - Update Kube dependencies in MCO to 1.20.0\n1916747 - remove links to quick starts if virtualization operator isn\u0027t updated to 2.6\n1916764 - editing a workload with no application applied, will auto fill the app\n1916834 - Pipeline Metrics - Text Updates\n1916843 - collect logs from openshift-sdn-controller pod\n1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed\n1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually\n1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited\n1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error \"Forbidden: cannot specify lbFloatingIP and apiFloatingIP together\"\n1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace\n1917101 - [UPI on oVirt] - \u0027RHCOS image\u0027 topic isn\u0027t located in the right place in UPI document\n1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to \u0027\"ProxyConfigController\" controller failed to sync \"key\"\u0027 error\n1917117 - Common templates - disks screen: invalid disk name\n1917124 - Custom template - clone existing PVC - the name of the target VM\u0027s data volume is hard-coded; only one VM can be created\n1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator\n1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
\n1917148 - [oVirt] Consume 23-10 ovirt sdk\n1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened\n1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console\n1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory\n1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7\n1917327 - annotations.message maybe wrong for NTOPodsNotReady alert\n1917367 - Refactor periodic.go\n1917371 - Add docs on how to use the built-in profiler\n1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console\n1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui\n1917484 - [BM][IPI] Failed to scale down machineset\n1917522 - Deprecate --filter-by-os in oc adm catalog mirror\n1917537 - controllers continuously busy reconciling operator\n1917551 - use min_over_time for vsphere prometheus alerts\n1917585 - OLM Operator install page missing i18n\n1917587 - Manila CSI operator becomes degraded if user doesn\u0027t have permissions to list share types\n1917605 - Deleting an exgw causes pods to no longer route to other exgws\n1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API\n1917656 - Add to Project/application for eventSources from topology shows 404\n1917658 - Show TP badge for sources powered by camel connectors in create flow\n1917660 - Editing parallelism of job get error info\n1917678 - Could not provision pv when no symlink and target found on rhel worker\n1917679 - Hide double CTA in admin pipelineruns tab\n1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster. 
\n1917759 - Console operator panics after setting plugin that does not exists to the console-operator config\n1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0\n1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0\n1917799 - Gather s list of names and versions of installed OLM operators\n1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error\n1917814 - Show Broker create option in eventing under admin perspective\n1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1917872 - [oVirt] rebase on latest SDK 2021-01-12\n1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image\n1917938 - upgrade version of dnsmasq package\n1917942 - Canary controller causes panic in ingress-operator\n1918019 - Undesired scrollbars in markdown area of QuickStart\n1918068 - Flaky olm integration tests\n1918085 - reversed name of job and namespace in cvo log\n1918112 - Flavor is not editable if a customize VM is created from cli\n1918129 - Update IO sample archive with missing resources \u0026 remove IP anonymization from clusteroperator resources\n1918132 - i18n: Volume Snapshot Contents menu is not translated\n1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2\n1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn\u0027t be installed on OSP\n1918153 - When `\u0026` character is set as an environment variable in a build config it is getting converted as `\\u0026`\n1918185 - Capitalization on PLR details page\n1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections\n1918318 - Kamelet connector\u0027s are not shown in eventing section under Admin perspective\n1918351 - Gather SAP configuration (SCC \u0026 ClusterRoleBinding)\n1918375 - [calico] rbac-proxy container in kube-proxy fails to create 
tokenreviews\n1918395 - [ovirt] increase livenessProbe period\n1918415 - MCD nil pointer on dropins\n1918438 - [ja_JP, zh_CN] Serverless i18n misses\n1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig\n1918471 - CustomNoUpgrade Feature gates are not working correctly\n1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk\n1918622 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1918623 - Updating ose-jenkins-agent-nodejs-12 builder \u0026 base images to be consistent with ART\n1918625 - Updating ose-jenkins-agent-nodejs-10 builder \u0026 base images to be consistent with ART\n1918635 - Updating openshift-jenkins-2 builder \u0026 base images to be consistent with ART #1197\n1918639 - Event listener with triggerRef crashes the console\n1918648 - Subscription page doesn\u0027t show InstallPlan correctly\n1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack\n1918748 - helmchartrepo is not http(s)_proxy-aware\n1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1918803 - Need dedicated details page w/ global config breadcrumbs for \u0027KnativeServing\u0027 plugin\n1918826 - Insights popover icons are not horizontally aligned\n1918879 - need better debug for bad pull secrets\n1918958 - The default NMstate instance from the operator is incorrect\n1919097 - Close bracket \")\" missing at the end of the sentence in the UI\n1919231 - quick search modal cut off on smaller screens\n1919259 - Make \"Add x\" singular in Pipeline Builder\n1919260 - VM Template list actions should not wrap\n1919271 - NM prepender script doesn\u0027t support systemd-resolved\n1919341 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry\n1919379 - dotnet logo out of date\n1919387 - Console 
login fails with no error when it can\u0027t write to localStorage\n1919396 - A11y Violation: svg-img-alt on Pod Status ring\n1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren\u0027t verified\n1919750 - Search InstallPlans got Minified React error\n1919778 - Upgrade is stuck in insights operator Degraded with \"Source clusterconfig could not be retrieved\" until insights operator pod is manually deleted\n1919823 - OCP 4.7 Internationalization Chinese tranlate issue\n1919851 - Visualization does not render when Pipeline \u0026 Task share same name\n1919862 - The tip information for `oc new-project  --skip-config-write` is wrong\n1919876 - VM created via customize wizard cannot inherit template\u0027s PVC attributes\n1919877 - Click on KSVC breaks with white screen\n1919879 - The toolbox container name is changed from \u0027toolbox-root\u0027  to \u0027toolbox-\u0027 in a chroot environment\n1919945 - user entered name value overridden by default value when selecting a git repository\n1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference\n1919970 - NTO does not update when the tuned profile is updated. 
\n1919999 - Bump Cluster Resource Operator Golang Versions\n1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration\n1920200 - user-settings network error results in infinite loop of requests\n1920205 - operator-registry e2e tests not working properly\n1920214 - Bump golang to 1.15 in cluster-resource-override-admission\n1920248 - re-running the pipelinerun with pipelinespec crashes the UI\n1920320 - VM template field is \"Not available\" if it\u0027s created from common template\n1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`\n1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs\n1920390 - Monitoring \u003e Metrics graph shifts to the left when clicking the \"Stacked\" option and when toggling data series lines on / off\n1920426 - Egress Router CNI OWNERS file should have ovn-k team members\n1920427 - Need to update `oc login` help page since we don\u0027t support prompt interactively for the username\n1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time\n1920438 - openshift-tuned panics on turning debugging on/off. 
\n1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn\n1920481 - kuryr-cni pods using unreasonable amount of CPU\n1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof\n1920524 - Topology graph crashes adding Open Data Hub operator\n1920526 - catalog operator causing CPU spikes and bad etcd performance\n1920551 - Boot Order is not editable for Templates in \"openshift\" namespace\n1920555 - bump cluster-resource-override-admission api dependencies\n1920571 - fcp multipath will not recover failed paths automatically\n1920619 - Remove default scheduler profile value\n1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present\n1920674 - MissingKey errors in bindings namespace\n1920684 - Text in language preferences modal is misleading\n1920695 - CI is broken because of bad image registry reference in the Makefile\n1920756 - update generic-admission-server library to get the system:masters authorization optimization\n1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for \"network-check-target\" failed when \"defaultNodeSelector\" is set\n1920771 - i18n: Delete persistent volume claim drop down is not translated\n1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI\n1920912 - Unable to power off BMH from console\n1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by \"2\"\n1920984 - [e2e][automation] some menu items names are out dated\n1921013 - Gather PersistentVolume definition (if any) used in image registry config\n1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)\n1921087 - \u0027start next quick start\u0027 link doesn\u0027t work and is unintuitive\n1921088 - test-cmd is failing on volumes.sh pretty consistently\n1921248 - Clarify the kubelet configuration cr description\n1921253 - Text filter default placeholder text not 
internationalized\n1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window\n1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo\n1921277 - Fix Warning and Info log statements to handle arguments\n1921281 - oc get -o yaml --export returns \"error: unknown flag: --export\"\n1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn\u0027t exist\n1921556 - [OCS with Vault]: OCS pods didn\u0027t comeup after deploying with Vault details from UI\n1921572 - For external source (i.e GitHub Source) form view as well shows yaml\n1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass\n1921610 - Pipeline metrics font size inconsistency\n1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1921655 - [OSP] Incorrect error handling during cloudinfo generation\n1921713 - [e2e][automation]  fix failing VM migration tests\n1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view\n1921774 - delete application modal errors when a resource cannot be found\n1921806 - Explore page APIResourceLinks aren\u0027t i18ned\n1921823 - CheckBoxControls not internationalized\n1921836 - AccessTableRows don\u0027t internationalize \"User\" or \"Group\"\n1921857 - Test flake when hitting router in e2e tests due to one router not being up to date\n1921880 - Dynamic plugins are not initialized on console load in production mode\n1921911 - Installer PR #4589 is causing leak of IAM role policy bindings\n1921921 - \"Global Configuration\" breadcrumb does not use sentence case\n1921949 - Console bug - source code URL broken for gitlab self-hosted repositories\n1921954 - Subscription-related constraints in ResolutionFailed events are misleading\n1922015 - buttons in modal header 
are invisible on Safari\n1922021 - Nodes terminal page \u0027Expand\u0027 \u0027Collapse\u0027 button not translated\n1922050 - [e2e][automation] Improve vm clone tests\n1922066 - Cannot create VM from custom template which has extra disk\n1922098 - Namespace selection dialog is not closed after select a namespace\n1922099 - Updated Readme documentation for QE code review and setup\n1922146 - Egress Router CNI doesn\u0027t have logging support. \n1922267 - Collect specific ADFS error\n1922292 - Bump RHCOS boot images for 4.7\n1922454 - CRI-O doesn\u0027t enable pprof by default\n1922473 - reconcile LSO images for 4.8\n1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace\n1922782 - Source registry missing docker:// in yaml\n1922907 - Interop UI Tests - step implementation for updating feature files\n1922911 - Page crash when click the \"Stacked\" checkbox after clicking the data series toggle buttons\n1922991 - \"verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\" test fails on OKD\n1923003 - WebConsole Insights widget showing \"Issues pending\" when the cluster doesn\u0027t report anything\n1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources\n1923102 - [vsphere-problem-detector-operator] pod\u0027s version is not correct\n1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot\n1923674 - k8s 1.20 vendor dependencies\n1923721 - PipelineRun running status icon is not rotating\n1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios\n1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator\n1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator\n1923874 - Unable to specify values with % in kubeletconfig\n1923888 - Fixes error metadata gathering\n1923892 - Update arch.md after 
refactor. \n1923894 - \"installed\" operator status in operatorhub page does not reflect the real status of operator\n1923895 - Changelog generation. \n1923911 - [e2e][automation] Improve tests for vm details page and list filter\n1923945 - PVC Name and Namespace resets when user changes os/flavor/workload\n1923951 - EventSources shows `undefined` in project\n1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins\n1924046 - Localhost: Refreshing on a Project removes it from nav item urls\n1924078 - Topology quick search View all results footer should be sticky. \n1924081 - NTO should ship the latest Tuned daemon release 2.15\n1924084 - backend tests incorrectly hard-code artifacts dir\n1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents  do not have unexpected content using a simple Docker Strategy Build\n1924135 - Under sufficient load, CRI-O may segfault\n1924143 - Code Editor Decorator url is broken for Bitbucket repos\n1924188 - Language selector dropdown doesn\u0027t always pre-select the language\n1924365 - Add extra disk for VM which use boot source PXE\n1924383 - Degraded network operator during upgrade to 4.7.z\n1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
\n1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can\u0027t set finalizers on\n1924583 - Deprectaed templates are listed in the Templates screen\n1924870 - pick upstream pr#96901: plumb context with request deadline\n1924955 - Images from Private external registry not working in deploy Image\n1924961 - k8sutil.TrimDNS1123Label creates invalid values\n1924985 - Build egress-router-cni for both RHEL 7 and 8\n1925020 - Console demo plugin deployment image shoult not point to dockerhub\n1925024 - Remove extra validations on kafka source form view net section\n1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running\n1925072 - NTO needs to ship the current latest stalld v1.7.0\n1925163 - Missing info about dev catalog in boot source template column\n1925200 - Monitoring Alert icon is missing on the workload in Topology view\n1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1\n1925319 - bash syntax error in configure-ovs.sh script\n1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data\n1925516 - Pipeline Metrics Tooltips are overlapping data\n1925562 - Add new ArgoCD link from GitOps application environments page\n1925596 - Gitops details page image and commit id text overflows past card boundary\n1926556 - \u0027excessive etcd leader changes\u0027 test case failing in serial job because prometheus data is wiped by machine set test\n1926588 - The tarball of operator-sdk is not ready for ocp4.7\n1927456 - 4.7 still points to 4.6 catalog images\n1927500 - API server exits non-zero on 2 SIGTERM signals\n1929278 - Monitoring workloads using too high a priorityclass\n1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n1929920 - Cluster monitoring documentation link is broken - 404 not found\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-10103\nhttps://access.redhat.com/security/cve/CVE-2018-10105\nhttps://access.redhat.com/security/cve/CVE-2018-14461\nhttps://access.redhat.com/security/cve/CVE-2018-14462\nhttps://access.redhat.com/security/cve/CVE-2018-14463\nhttps://access.redhat.com/security/cve/CVE-2018-14464\nhttps://access.redhat.com/security/cve/CVE-2018-14465\nhttps://access.redhat.com/security/cve/CVE-2018-14466\nhttps://access.redhat.com/security/cve/CVE-2018-14467\nhttps://access.redhat.com/security/cve/CVE-2018-14468\nhttps://access.redhat.com/security/cve/CVE-2018-14469\nhttps://access.redhat.com/security/cve/CVE-2018-14470\nhttps://access.redhat.com/security/cve/CVE-2018-14553\nhttps://access.redhat.com/security/cve/CVE-2018-14879\nhttps://access.redhat.com/security/cve/CVE-2018-14880\nhttps://access.redhat.com/security/cve/CVE-2018-14881\nhttps://access.redhat.com/security/cve/CVE-2018-14882\nhttps://access.redhat.com/security/cve/CVE-2018-16227\nhttps://access.redhat.com/security/cve/CVE-2018-16228\nhttps://access.redhat.com/security/cve/CVE-2018-16229\nhttps://access.redhat.com/security/cve/CVE-2018-16230\nhttps://access.redhat.com/security/cve/CVE-2018-16300\nhttps://access.redhat.com/security/cve/CVE-2018-16451\nhttps://access.redhat.com/security/cve/CVE-2018-16452\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2019-3884\nhttps://access.redhat.com/security/cve/CVE-2019-5018\nhttps://access.redhat.com/security/cve/CVE-2019-6977\nhttps://access.redhat.com/security/cve/CVE-2019-6978\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.r
edhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9455\nhttps://access.redhat.com/security/cve/CVE-2019-9458\nhttps://access.redhat.com/security/cve/CVE-2019-11068\nhttps://access.redhat.com/security/cve/CVE-2019-12614\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13225\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15165\nhttps://access.redhat.com/security/cve/CVE-2019-15166\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-15917\nhttps://access.redhat.com/security/cve/CVE-2019-15925\nhttps://access.redhat.com/security/cve/CVE-2019-16167\nhttps://access.redhat.com/security/cve/CVE-2019-16168\nhttps://access.redhat.com/security/cve/CVE-2019-16231\nhttps://access.redhat.com/security/cve/CVE-2019-16233\nhttps://access.redhat.com/security/cve/CVE-2019-16935\nhttps://access.redhat.com/security/cve/CVE-2019-17450\nhttps://access.redhat.com/security/cve/CVE-2019-17546\nhttps://access.redhat.com/security/cve/CVE-2019-18197\
https://access.redhat.com/security/cve/CVE-2019-18808
https://access.redhat.com/security/cve/CVE-2019-18809
https://access.redhat.com/security/cve/CVE-2019-19046
https://access.redhat.com/security/cve/CVE-2019-19056
https://access.redhat.com/security/cve/CVE-2019-19062
https://access.redhat.com/security/cve/CVE-2019-19063
https://access.redhat.com/security/cve/CVE-2019-19068
https://access.redhat.com/security/cve/CVE-2019-19072
https://access.redhat.com/security/cve/CVE-2019-19221
https://access.redhat.com/security/cve/CVE-2019-19319
https://access.redhat.com/security/cve/CVE-2019-19332
https://access.redhat.com/security/cve/CVE-2019-19447
https://access.redhat.com/security/cve/CVE-2019-19524
https://access.redhat.com/security/cve/CVE-2019-19533
https://access.redhat.com/security/cve/CVE-2019-19537
https://access.redhat.com/security/cve/CVE-2019-19543
https://access.redhat.com/security/cve/CVE-2019-19602
https://access.redhat.com/security/cve/CVE-2019-19767
https://access.redhat.com/security/cve/CVE-2019-19770
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-19956
https://access.redhat.com/security/cve/CVE-2019-20054
https://access.redhat.com/security/cve/CVE-2019-20218
https://access.redhat.com/security/cve/CVE-2019-20386
https://access.redhat.com/security/cve/CVE-2019-20387
https://access.redhat.com/security/cve/CVE-2019-20388
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20636
https://access.redhat.com/security/cve/CVE-2019-20807
https://access.redhat.com/security/cve/CVE-2019-20812
https://access.redhat.com/security/cve/CVE-2019-20907
https://access.redhat.com/security/cve/CVE-2019-20916
https://access.redhat.com/security/cve/CVE-2020-0305
https://access.redhat.com/security/cve/CVE-2020-0444
https://access.redhat.com/security/cve/CVE-2020-1716
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-1751
https://access.redhat.com/security/cve/CVE-2020-1752
https://access.redhat.com/security/cve/CVE-2020-1971
https://access.redhat.com/security/cve/CVE-2020-2574
https://access.redhat.com/security/cve/CVE-2020-2752
https://access.redhat.com/security/cve/CVE-2020-2922
https://access.redhat.com/security/cve/CVE-2020-3862
https://access.redhat.com/security/cve/CVE-2020-3864
https://access.redhat.com/security/cve/CVE-2020-3865
https://access.redhat.com/security/cve/CVE-2020-3867
https://access.redhat.com/security/cve/CVE-2020-3868
https://access.redhat.com/security/cve/CVE-2020-3885
https://access.redhat.com/security/cve/CVE-2020-3894
https://access.redhat.com/security/cve/CVE-2020-3895
https://access.redhat.com/security/cve/CVE-2020-3897
https://access.redhat.com/security/cve/CVE-2020-3898
https://access.redhat.com/security/cve/CVE-2020-3899
https://access.redhat.com/security/cve/CVE-2020-3900
https://access.redhat.com/security/cve/CVE-2020-3901
https://access.redhat.com/security/cve/CVE-2020-3902
https://access.redhat.com/security/cve/CVE-2020-6405
https://access.redhat.com/security/cve/CVE-2020-7595
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-8177
https://access.redhat.com/security/cve/CVE-2020-8492
https://access.redhat.com/security/cve/CVE-2020-8563
https://access.redhat.com/security/cve/CVE-2020-8566
https://access.redhat.com/security/cve/CVE-2020-8619
https://access.redhat.com/security/cve/CVE-2020-8622
https://access.redhat.com/security/cve/CVE-2020-8623
https://access.redhat.com/security/cve/CVE-2020-8624
https://access.redhat.com/security/cve/CVE-2020-8647
https://access.redhat.com/security/cve/CVE-2020-8648
https://access.redhat.com/security/cve/CVE-2020-8649
https://access.redhat.com/security/cve/CVE-2020-9327
https://access.redhat.com/security/cve/CVE-2020-9802
https://access.redhat.com/security/cve/CVE-2020-9803
https://access.redhat.com/security/cve/CVE-2020-9805
https://access.redhat.com/security/cve/CVE-2020-9806
https://access.redhat.com/security/cve/CVE-2020-9807
https://access.redhat.com/security/cve/CVE-2020-9843
https://access.redhat.com/security/cve/CVE-2020-9850
https://access.redhat.com/security/cve/CVE-2020-9862
https://access.redhat.com/security/cve/CVE-2020-9893
https://access.redhat.com/security/cve/CVE-2020-9894
https://access.redhat.com/security/cve/CVE-2020-9895
https://access.redhat.com/security/cve/CVE-2020-9915
https://access.redhat.com/security/cve/CVE-2020-9925
https://access.redhat.com/security/cve/CVE-2020-10018
https://access.redhat.com/security/cve/CVE-2020-10029
https://access.redhat.com/security/cve/CVE-2020-10732
https://access.redhat.com/security/cve/CVE-2020-10749
https://access.redhat.com/security/cve/CVE-2020-10751
https://access.redhat.com/security/cve/CVE-2020-10763
https://access.redhat.com/security/cve/CVE-2020-10773
https://access.redhat.com/security/cve/CVE-2020-10774
https://access.redhat.com/security/cve/CVE-2020-10942
https://access.redhat.com/security/cve/CVE-2020-11565
https://access.redhat.com/security/cve/CVE-2020-11668
https://access.redhat.com/security/cve/CVE-2020-11793
https://access.redhat.com/security/cve/CVE-2020-12465
https://access.redhat.com/security/cve/CVE-2020-12655
https://access.redhat.com/security/cve/CVE-2020-12659
https://access.redhat.com/security/cve/CVE-2020-12770
https://access.redhat.com/security/cve/CVE-2020-12826
https://access.redhat.com/security/cve/CVE-2020-13249
https://access.redhat.com/security/cve/CVE-2020-13630
https://access.redhat.com/security/cve/CVE-2020-13631
https://access.redhat.com/security/cve/CVE-2020-13632
https://access.redhat.com/security/cve/CVE-2020-14019
https://access.redhat.com/security/cve/CVE-2020-14040
https://access.redhat.com/security/cve/CVE-2020-14381
https://access.redhat.com/security/cve/CVE-2020-14382
https://access.redhat.com/security/cve/CVE-2020-14391
https://access.redhat.com/security/cve/CVE-2020-14422
https://access.redhat.com/security/cve/CVE-2020-15157
https://access.redhat.com/security/cve/CVE-2020-15503
https://access.redhat.com/security/cve/CVE-2020-15862
https://access.redhat.com/security/cve/CVE-2020-15999
https://access.redhat.com/security/cve/CVE-2020-16166
https://access.redhat.com/security/cve/CVE-2020-24490
https://access.redhat.com/security/cve/CVE-2020-24659
https://access.redhat.com/security/cve/CVE-2020-25211
https://access.redhat.com/security/cve/CVE-2020-25641
https://access.redhat.com/security/cve/CVE-2020-25658
https://access.redhat.com/security/cve/CVE-2020-25661
https://access.redhat.com/security/cve/CVE-2020-25662
https://access.redhat.com/security/cve/CVE-2020-25681
https://access.redhat.com/security/cve/CVE-2020-25682
https://access.redhat.com/security/cve/CVE-2020-25683
https://access.redhat.com/security/cve/CVE-2020-25684
https://access.redhat.com/security/cve/CVE-2020-25685
https://access.redhat.com/security/cve/CVE-2020-25686
https://access.redhat.com/security/cve/CVE-2020-25687
https://access.redhat.com/security/cve/CVE-2020-25694
https://access.redhat.com/security/cve/CVE-2020-25696
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-27813
https://access.redhat.com/security/cve/CVE-2020-27846
https://access.redhat.com/security/cve/CVE-2020-28362
https://access.redhat.com/security/cve/CVE-2020-29652
https://access.redhat.com/security/cve/CVE-2021-2007
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.
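Every reference above follows the same Red Hat CVE-page pattern, `https://access.redhat.com/security/cve/<CVE-ID>`. A minimal sketch for building such links from CVE identifiers; the helper name is illustrative, and the URL pattern is taken directly from the list above, not from a documented API:

```python
import re

# CVE identifiers: "CVE-", a 4-digit year, then a sequence number of 4+ digits.
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

def redhat_cve_url(cve_id: str) -> str:
    """Return the Red Hat CVE-page URL for a well-formed CVE identifier."""
    if not CVE_ID.match(cve_id):
        raise ValueError(f"not a CVE identifier: {cve_id!r}")
    return f"https://access.redhat.com/security/cve/{cve_id}"
```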
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T
lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H
EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8
4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4
mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL
ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy
Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk
4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM
uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG
krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv
RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6
McvuEaxco7U=
=sw8i
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

This advisory provides the following updates among others:

* Enhances profile parsing time.
* Fixes excessive resource consumption from the Operator.
* Fixes default content image.
* Fixes outdated remediation handling.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1918990 - ComplianceSuite scans use quay content image for initContainer
1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present
1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules
1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.
Bug Fix(es):

* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)
* The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)
* The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)
* [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)
* The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)
* Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)
* [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

3. Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

5. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

3. Description:

WebKitGTK+ is a port of the WebKit portable web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source:
webkitgtk4-2.28.2-2.el7.src.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source:
webkitgtk4-2.28.2-2.el7.src.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch:
webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64:
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source:
webkitgtk4-2.28.2-2.el7.src.rpm

ppc64:
webkitgtk4-2.28.2-2.el7.ppc.rpm
webkitgtk4-2.28.2-2.el7.ppc64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm
webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm
webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le:
webkitgtk4-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x:
webkitgtk4-2.28.2-2.el7.s390.rpm
webkitgtk4-2.28.2-2.el7.s390x.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm
webkitgtk4-jsc-2.28.2-2.el7.s390.rpm
webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch:
webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64:
webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm
webkitgtk4-devel-2.28.2-2.el7.ppc.rpm
webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x:
webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm
webkitgtk4-devel-2.28.2-2.el7.s390.rpm
webkitgtk4-devel-2.28.2-2.el7.s390x.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source:
webkitgtk4-2.28.2-2.el7.src.rpm

x86_64:
webkitgtk4-2.28.2-2.el7.i686.rpm
webkitgtk4-2.28.2-2.el7.x86_64.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm
webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm
webkitgtk4-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm
webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v.

Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment
1838802 - mysql8 connector from operatorhub does not work with metering operator
1838845 - Metering operator can't connect to postgres DB from Operator Hub
1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1868294 - NFD operator does not allow customisation of nfd-worker.conf
1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration
1890672 - NFD is missing a build flag to build correctly
1890741 - path to the CA trust bundle ConfigMap is broken in report operator
1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster
1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel
1900125 - FIPS error while generating RSA private key for CA
1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub
1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub
1913837 - The CI and ART 4.7 metering images are not mirrored
1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le
1916010 - olm skip range is set to the wrong range
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923998 - NFD Operator is failing to update and remains in Replacing state

5.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2020-1-29-2 iCloud for Windows 10.9.2

iCloud for Windows 10.9.2 is now available and addresses the following:

ImageIO
Available for: Windows 10 and later via the Microsoft Store
Impact: Processing a maliciously crafted image may lead to arbitrary code execution
Description: An out-of-bounds read was addressed with improved input validation.
CVE-2020-3826: Samuel Groß of Google Project Zero

libxml2
Available for: Windows 10 and later via the Microsoft Store
Impact: Processing maliciously crafted XML may lead to an unexpected application termination or arbitrary code execution
Description: A buffer overflow was addressed with improved size validation.
CVE-2020-3865: Ryan Pickren (ryanpickren.com)

Installation note:

iCloud for Windows 10.9.2 may be obtained from:
https://support.apple.com/HT204283

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at:
https://www.apple.com/support/security/pgp/
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEM5FaaFRjww9EJgvRBz4uGe3y0M0FAl4x9jMACgkQBz4uGe3y
0M1Apw/+PrQvBheHkxIo2XjPOyTxO+M8mlaU+6gY7Ue14zivPO20JqRLb34FyNfh
iE+RSJ3NB/0cdZIUH1xcrKzK+tmVFVETJaBmLmoTHBy3946DQtUvditLfTHYnYzC
peJbdG4UyevVwf/AoED5iI89lf/ADOWm9Xu0LVtvDKyTAFewQp9oOlG731twL9iI
6ojuzYokYzJSWcDlLMTFB4sDpZsNEz2Crf+WZ44r5bHKcSTi7HzS+OPueQ6dSdqi
Y9ioDv/SB0dnLJZE2wq6eaFL2t7eXelYUSL7SekXI4aYQkhaOQFabutFuYNoOX4e
+ctnbSdVT5WjG7tyg9L7bl4m1q8GgH43OLBmA1Z/gps004PHMQ87cRRjvGGKIQOf
YMI0VBqFc6cAnDYh4Oun31gbg9Y1llYYwTQex7gjx9U+v3FKOaxWxQg8N9y4d2+v
qsStr7HKVKcRE/LyEx4fA44VoKNHyHZ4LtQSeX998MTapyH5XbbHEWr/K4TcJ8ij
6Zv/GkUKeINDJbRFhiMJtGThTw5dba5sfHfVv88NrbNYcwwVQrvlkfMq8Jrn0YEf
rahjCDLigXXbyaqxM57feJ9+y6jHpULeywomGv+QEzyALTdGKIaq7w1pwLdOHizi
Lcxvr8FxmUxydrvFJSUDRa9ELigIsLmgPB3l1UiUmd3AQ38ymJw=
=tRpr
-----END PGP SIGNATURE-----
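Advisories like the two above identify a fix by version (webkitgtk4 2.28.2, iCloud for Windows 10.9.2), so checking exposure reduces to a dotted-version comparison. A minimal sketch, assuming plain numeric dotted versions; distribution suffixes such as `-2.el7` would need extra handling, and the helper names are my own:

```python
def parse_version(v: str) -> tuple:
    """Split a dotted numeric version string into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, fixed: str) -> bool:
    """True when the installed version is at or above the fixed version."""
    a, b = parse_version(installed), parse_version(fixed)
    # Pad the shorter tuple with zeros so "2.28" compares as "2.28.0".
    width = max(len(a), len(b))
    a += (0,) * (width - len(a))
    b += (0,) * (width - len(b))
    return a >= b
```

Tuple comparison handles multi-digit components correctly, where naive string comparison would rank "2.9" above "2.28".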
Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization  : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All,  data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalization is wrong\n2025837 - Warn users that the RHEL URL expires\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Update of task is failing (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fails during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. ------------------------------------------------------------------------\nWebKitGTK and WPE WebKit Security Advisory                 WSA-2020-0002\n------------------------------------------------------------------------\n\nDate reported           : February 14, 2020\nAdvisory ID             : WSA-2020-0002\nWebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2020-0002.html\nWPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2020-0002.html\nCVE identifiers         : CVE-2020-3862, CVE-2020-3864, CVE-2020-3865,\n                          CVE-2020-3867, CVE-2020-3868. \n\nSeveral vulnerabilities were discovered in WebKitGTK and WPE WebKit. \n\nCVE-2020-3862\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to Srikanth Gatta of Google Chrome. \n\nCVE-2020-3864\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to Ryan Pickren (ryanpickren.com). \n\nCVE-2020-3865\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to Ryan Pickren (ryanpickren.com). \n\nCVE-2020-3867\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. \n    Credit to an anonymous researcher. \n\nCVE-2020-3868\n    Versions affected: WebKitGTK before 2.26.4 and WPE WebKit before\n    2.26.4. 
\n    Credit to Marcin Towalski of Cisco Talos. \n\n\nWe recommend updating to the latest stable versions of WebKitGTK and WPE\nWebKit. It is the best way to ensure that you are running safe versions\nof WebKit. Please check our websites for information about the latest\nstable releases. \n\nFurther information about WebKitGTK and WPE WebKit security advisories\ncan be found at: https://webkitgtk.org/security.html or\nhttps://wpewebkit.org/security/. \n\nThe WebKitGTK and WPE WebKit team,\nFebruary 14, 2020\n\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-3868"
      },
      {
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-3868",
        "trust": 2.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "156392",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0567",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0538",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0355",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0677",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156153",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156400",
        "trust": 0.6
      },
      {
        "db": "TALOS",
        "id": "TALOS-2019-0967",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-15279",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-181993",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3868",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159375",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "156152",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "id": "VAR-202002-1480",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181993"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:49:31.942000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Multiple Apple product WebKit Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=111065"
      },
      {
        "title": "Ubuntu Security Notice: webkit2gtk vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4281-1"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2020-3868 log"
      },
      {
        "title": "Debian Security Advisories: DSA-4627-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=78d59627ce9fc1018e3643bc1e16ec00"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202002-10] webkit2gtk: multiple issues",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202002-10"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      },
      {
        "title": "Threatpost",
        "trust": 0.1,
        "url": "https://threatpost.com/apple-patches-ios-device-tracking/152364/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.1
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210920"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210947"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210948"
      },
      {
        "trust": 1.8,
        "url": "http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00004.html"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht210920"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210947"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0538/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156392/webkitgtk-wpe-webkit-dos-logic-issue-code-execution.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156400/ubuntu-security-notice-usn-4281-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156153/apple-security-advisory-2020-1-29-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://talosintelligence.com/vulnerability_reports/talos-2019-0967"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-five-vulnerabilities-31619"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0567/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0677/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0355/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.4,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4281-1/"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8688"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8678"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8669"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8644"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8680"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3826"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht204283"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3825"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3846"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://wpewebkit.org/security/wsa-2020-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2020-0002.html"
      },
      {
        "trust": 0.1,
        "url": "https://wpewebkit.org/security/."
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-02-27T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "date": "2020-02-27T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2020-09-30T15:47:21",
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2020-01-30T14:46:23",
        "db": "PACKETSTORM",
        "id": "156152"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-02-17T15:02:22",
        "db": "PACKETSTORM",
        "id": "156392"
      },
      {
        "date": "2020-01-31T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      },
      {
        "date": "2020-02-27T21:15:18.210000",
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181993"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3868"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      },
      {
        "date": "2024-11-21T05:31:52.097000",
        "db": "NVD",
        "id": "CVE-2020-3868"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple Apple Product Buffer Error Vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202001-1451"
      }
    ],
    "trust": 0.6
  }
}

VAR-202210-1070

Vulnerability from variot - Updated: 2025-12-22 22:47

An issue was discovered in libxml2 before 2.10.3. Certain invalid XML entity definitions can corrupt a hash table key, potentially leading to subsequent logic errors. In one case, a double-free can be provoked. libxml2 is an XML parsing library written in C that can be called from many other languages. No further details about this vulnerability are currently available; watch CNNVD or vendor announcements for updates.

Description:

Red Hat Advanced Cluster Management for Kubernetes 2.6.4 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/

Issue addressed:

  • RHACM 2.6.4 images (BZ# 2153382)

Security fixes:

  • CVE-2022-24999 express: "qs" prototype poisoning causes the hang of the node process

  • Bugs fixed (https://bugzilla.redhat.com/):

2150323 - CVE-2022-24999 express: "qs" prototype poisoning causes the hang of the node process
2153382 - RHACM 2.6.4 images
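The "qs" issue fixed above (CVE-2022-24999) is specific to JavaScript's prototype chain, but the underlying pattern, a query-string parser that merges attacker-controlled key paths into a shared object, can be sketched language-neutrally. The following Python code is an illustrative analog only, not the actual qs implementation; all names in it are hypothetical:

```python
# Illustrative analog of key-path poisoning (hypothetical code, not the
# real 'qs' parser). A naive parser walks attacker-supplied bracketed
# keys like a[b][c] and merges them into a SHARED defaults dict.
SHARED_DEFAULTS = {"timeout": 30}

def naive_merge(target: dict, path: list, value):
    """Walk the key path, creating nested dicts, then set the leaf."""
    for key in path[:-1]:
        target = target.setdefault(key, {})
    target[path[-1]] = value

def parse_query(qs: str, defaults: dict = SHARED_DEFAULTS) -> dict:
    """BUG: merges parsed keys directly into the shared defaults dict."""
    for pair in qs.split("&"):
        raw_key, _, value = pair.partition("=")
        path = raw_key.replace("]", "").split("[")  # a[b][c] -> [a, b, c]
        naive_merge(defaults, path, value)
    return defaults

parse_query("filter[timeout]=0")   # nests under 'filter' -- harmless
parse_query("timeout=0")           # overwrites the shared default!
print(SHARED_DEFAULTS["timeout"])  # now the string '0', for all callers
```

In the real CVE the poisoned object is `Object.prototype`, so every later lookup in the process is affected; the shared-dict mutation above plays the same role.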

  1. Description:

Network observability is an OpenShift operator that provides a monitoring pipeline to collect and enrich network flows that are produced by the Network observability eBPF agent.

The operator provides dashboards, metrics, and keeps flows accessible in a queryable log store, Grafana Loki. When a FlowCollector is deployed, new dashboards are available in the Console.

Solution:

Apply this errata by upgrading Network observability operator 1.0 to 1.1

  1. Bugs fixed (https://bugzilla.redhat.com/):

2169468 - CVE-2023-0813 network-observability-console-plugin-container: setting Loki authToken configuration to DISABLE or HOST mode leads to authentication longer being enforced

  1. Description:

Red Hat OpenShift GitOps is a declarative way to implement continuous deployment for cloud native applications.

Bugs fixed (https://bugzilla.redhat.com/):

2160492 - CVE-2023-22482 ArgoCD: JWT audience claim is not verified

  1. Description:

Migration Toolkit for Applications 6.0.1 Images

Security Fix(es) from Bugzilla:

  • loader-utils: prototype pollution in function parseQuery in parseQuery.js (CVE-2022-37601)

  • Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing (CVE-2022-42920)

  • gin: Unsanitized input in the default logger in github.com/gin-gonic/gin (CVE-2020-36567)

  • glob-parent: Regular Expression Denial of Service (CVE-2021-35065)

  • express: "qs" prototype poisoning causes the hang of the node process (CVE-2022-24999)

  • loader-utils:Regular expression denial of service (CVE-2022-37603)

  • golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests (CVE-2022-41717)

  • json5: Prototype Pollution in JSON5 via Parse Method (CVE-2022-46175)
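Of the fixes above, CVE-2022-41717 comes from Go's HTTP/2 server keeping a per-connection cache of canonicalized header keys that grows with every distinct attacker-chosen key. The sketch below is a hypothetical Python rendering of that pattern (all names are assumptions, not Go's actual internals); the fix is simply to bound the cache:

```python
# Illustrative sketch of the unbounded-cache pattern behind the Go
# net/http issue above (hypothetical names). A per-connection cache of
# canonicalized header keys grows with every distinct attacker-chosen
# key unless it is capped.
MAX_CACHE = 1024  # the fix: a hard bound on cached entries

def canonical(key: str) -> str:
    """Canonicalize 'content-type' -> 'Content-Type'."""
    return "-".join(p.capitalize() for p in key.split("-"))

class Connection:
    def __init__(self):
        self._canon_cache = {}

    def canonical_key(self, key: str) -> str:
        if key in self._canon_cache:
            return self._canon_cache[key]
        value = canonical(key)
        if len(self._canon_cache) < MAX_CACHE:  # the previously missing bound
            self._canon_cache[key] = value
        return value

conn = Connection()
for i in range(10_000):        # attacker sends 10,000 distinct header keys
    conn.canonical_key(f"x-h{i}")
print(len(conn._canon_cache))  # capped at 1024 instead of growing to 10000
```

Without the bound, memory use scales with the number of distinct header keys an attacker sends over the connection's lifetime.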

For more details about the security issue(s), including the impact, a CVSS score, and other related information, refer to the CVE page(s) listed in the References section.

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

JIRA issues fixed (https://issues.jboss.org/):

MTA-103 - MTA 6.0.1 Installation failed with CrashLoop Error for UI Pod
MTA-106 - Implement ability for windup addon image pull policy to be configurable
MTA-122 - MTA is upgrading automatically ignoring 'Manual' setting
MTA-123 - MTA Becomes unusable when running bulk binary analysis
MTA-127 - After upgrading MTA operator from 6.0.0 to 6.0.1 and running analysis, task pods starts failing
MTA-131 - Analysis stops working after MTA upgrade from 6.0.0 to 6.0.1
MTA-36 - Can't disable a proxy if it has an invalid configuration
MTA-44 - Make RWX volumes optional.
MTA-49 - Uploaded a local binary when return back to the page the UI should show green bar and correct %
MTA-59 - Getting error 401 if deleting many credentials quickly
MTA-65 - Set windup addon image pull policy to be controlled by the global image_pull_policy parameter
MTA-72 - CVE-2022-46175 mta-ui-container: json5: Prototype Pollution in JSON5 via Parse Method [mta-6]
MTA-73 - CVE-2022-37601 mta-ui-container: loader-utils: prototype pollution in function parseQuery in parseQuery.js [mta-6]
MTA-74 - CVE-2020-36567 mta-windup-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]
MTA-76 - CVE-2022-37603 mta-ui-container: loader-utils: Regular expression denial of service [mta-6]
MTA-77 - CVE-2020-36567 mta-hub-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]
MTA-80 - CVE-2021-35065 mta-ui-container: glob-parent: Regular Expression Denial of Service [mta-6]
MTA-82 - CVE-2022-42920 org.jboss.windup-windup-cli-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]
MTA-85 - CVE-2022-24999 mta-ui-container: express: "qs" prototype poisoning causes the hang of the node process [mta-6]
MTA-88 - CVE-2020-36567 mta-admin-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]
MTA-92 - CVE-2022-42920 org.jboss.windup.plugin-windup-maven-plugin-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]
MTA-96 - [UI] Maven -> "Local artifact repository" textbox can be checked and has no tooltip

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: libxml2 security update
Advisory ID:       RHSA-2023:0173-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:0173
Issue date:        2023-01-16
CVE Names:         CVE-2022-40303 CVE-2022-40304
====================================================================

1. Summary:

An update for libxml2 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64

3. Description:

The libxml2 library is a development toolbox providing the implementation of various XML standards.

Security Fix(es):

  • libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)

  • libxml2: dict corruption caused by entity reference cycles (CVE-2022-40304)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

The desktop must be restarted (log out, then log back in) for this update to take effect.

5. Bugs fixed (https://bugzilla.redhat.com/):

2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE
2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles

6. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

aarch64:
libxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.aarch64.rpm
libxml2-devel-2.9.7-15.el8_7.1.aarch64.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm

ppc64le:
libxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.ppc64le.rpm
libxml2-devel-2.9.7-15.el8_7.1.ppc64le.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm

s390x:
libxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.s390x.rpm
libxml2-devel-2.9.7-15.el8_7.1.s390x.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm

x86_64:
libxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm
libxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.i686.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.x86_64.rpm
libxml2-devel-2.9.7-15.el8_7.1.i686.rpm
libxml2-devel-2.9.7-15.el8_7.1.x86_64.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm

Red Hat Enterprise Linux BaseOS (v. 8):

Source:
libxml2-2.9.7-15.el8_7.1.src.rpm

aarch64:
libxml2-2.9.7-15.el8_7.1.aarch64.rpm
libxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.aarch64.rpm
python3-libxml2-2.9.7-15.el8_7.1.aarch64.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm

ppc64le:
libxml2-2.9.7-15.el8_7.1.ppc64le.rpm
libxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.ppc64le.rpm
python3-libxml2-2.9.7-15.el8_7.1.ppc64le.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm

s390x:
libxml2-2.9.7-15.el8_7.1.s390x.rpm
libxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.s390x.rpm
python3-libxml2-2.9.7-15.el8_7.1.s390x.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm

x86_64:
libxml2-2.9.7-15.el8_7.1.i686.rpm
libxml2-2.9.7-15.el8_7.1.x86_64.rpm
libxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm
libxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.i686.rpm
libxml2-debugsource-2.9.7-15.el8_7.1.x86_64.rpm
python3-libxml2-2.9.7-15.el8_7.1.x86_64.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm
python3-libxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2022-40303
https://access.redhat.com/security/cve/CVE-2022-40304
https://access.redhat.com/security/updates/classification/#moderate

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBY8UoQ9zjgjWX9erEAQgOHQ/+Ns7MY8MsoyU3wlWkuTW5mCenVYaSQa90
nHACMcvLgOKjM61s7FTXHnvV52TKj/+kZRToW2MCOTfuLsYnP0bZ+DFLkhDxoIGR
wN6X2Mgh/vtBmdLGtW8bjclpJuYLoGrjfoigFOZgXbRrKBNYLZqLPNutHzcF1IB2
hxdTDn7W+RNjCiP8+l+cTGYx0A9e1rYkCEx5B8qKfJY11/ojBTvxMf2jVnkFM9gz
ZwVCDtUyO7S7B5l6OqvH9qcR8dBOMw5KpaE4wGc+RF9iYI3t68xJlB2bj21Eb1oW
I4OwkkOh9i96f2XtusnTZIdJWVEMHJ3ZjM8a40nB7OzV0zSRRml61CLvLur6YAdo
nxQ3bstsq2+NhK/J0pHLUaVLQxeePgvHICJBIBXRV/bFHZw3qADo08FmvcVh4y9t
HSyYP6ZdofwxeR6elSke2cM57RWIcDVB8+o6ESUN4q5QMp6xjmA+82tHLmbguwyb
RMTW46jCZ3tZOo5+zIXBGlwvMZGv5PDzzgjwEboxBoWTGegBdPJkNNmezj9pZcyB
0l2Uh2LtC/uPbqBFzsPy94pyEd4VoRAY5/RBS+PgLCJm4o2qsaTN75jqHpSQXgw8
CfZT3+0XnYvsYHBt8jtiVUpHJpbfh9vNNjXzcLO/JKCv8NW3So1MfV2A+mT/mDmh
nCQ8kAI62fw=
=pLiQ
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Bugs fixed (https://bugzilla.redhat.com/):

2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be
2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents
2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets

CVE-2022-40303

Maddie Stone discovered that missing safety checks in several
functions can result in integer overflows when parsing an XML
document with the XML_PARSE_HUGE option enabled.
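The class of bug described above is a size computation that wraps around before a bounds check. The real fix lives in libxml2's C parser; the Python sketch below (hypothetical helper names) only illustrates the check-before-add idiom that such fixes use:

```python
INT_MAX = 2**31 - 1  # cap comparable to a C int on common platforms

def safe_extend(current_len: int, extra: int) -> int:
    """Grow a length, refusing additions that would exceed the cap.

    The vulnerable pattern computes `current_len + extra` first and
    compares afterwards, so in C a huge `extra` can wrap past the limit.
    Rearranging the check as `extra > INT_MAX - current_len` cannot
    overflow, because the subtraction stays within range.
    """
    if extra < 0 or extra > INT_MAX - current_len:
        raise OverflowError("buffer growth would exceed INT_MAX")
    return current_len + extra

print(safe_extend(100, 200))  # -> 300
try:
    safe_extend(INT_MAX - 10, 100)
except OverflowError as err:
    print("rejected:", err)
```

Python integers do not wrap, so the cap is enforced explicitly here; in C the same rearranged comparison is what prevents undefined behavior.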

CVE-2022-40304

Ned Williamson and Nathan Wachholz discovered a vulnerability in the
handling of entity reference cycle detection, which may result in
corrupted dictionary entries. This flaw may lead to logic errors,
including memory errors such as double-free flaws.
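Since the flaw is triggered by crafted entity definitions in attacker-supplied documents, one application-level stopgap is to refuse untrusted XML that declares its own entities, similar in spirit to hardening wrappers like defusedxml. The function below is an illustrative sketch under that assumption (the name and the coarse text check are hypothetical, not from the advisory); upgrading libxml2 remains the real remedy.

```python
import xml.etree.ElementTree as ET

def parse_no_entities(data: str) -> ET.Element:
    # Conservative check: reject any document that declares inline
    # entities at all, since crafted entity definitions are what
    # corrupted libxml2's dictionary in unpatched versions.
    if "<!ENTITY" in data:
        raise ValueError("inline entity definitions are not allowed")
    return ET.fromstring(data)
```

This deliberately over-rejects (any `<!ENTITY` declaration, benign or not), trading functionality for a simple, auditable rule.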

For the stable distribution (bullseye), these problems have been fixed in version 2.9.10+dfsg-6.7+deb11u3.

We recommend that you upgrade your libxml2 packages.

For the detailed security status of libxml2 please refer to its security tracker page at: https://security-tracker.debian.org/tracker/libxml2

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

Description:

Red Hat OpenShift Service Mesh is the Red Hat distribution of the Istio service mesh project, tailored for installation into an on-premise OpenShift Container Platform installation.

This advisory covers container images for the release. Bugs fixed (https://bugzilla.redhat.com/):

2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests

  1. JIRA issues fixed (https://issues.jboss.org/):

OSSM-1330 - Allow specifying secret as pilot server cert when using CertificateAuthority: Custom OSSM-2342 - Run OSSM operator on infrastructure nodes OSSM-2371 - Kiali in read-only mode still can change the log level of the envoy proxies OSSM-2373 - Can't login to Kiali with "Error trying to get OAuth metadata" OSSM-2374 - Deleting a SMM also deletes the SMMR in OpenShift Service Mesh OSSM-2492 - Default tolerations in SMCP not passed to Jaeger OSSM-2493 - Default nodeSelector and tolerations in SMCP not passed to Kiali OSSM-3317 - Error: deployment.accessible_namespaces set to ['**']


Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202210-1070",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "16.2"
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "clustered data ontap antivirus connector",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "clustered data ontap",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.0"
      },
      {
        "model": "smi-s provider",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.7.2"
      },
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.7.2"
      },
      {
        "model": "libxml2",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "xmlsoft",
        "version": "2.10.3"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.7.2"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "9.2"
      },
      {
        "model": "manageability software development kit",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "active iq unified manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "snapmanager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.6.2"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "171025"
      },
      {
        "db": "PACKETSTORM",
        "id": "171016"
      },
      {
        "db": "PACKETSTORM",
        "id": "170753"
      },
      {
        "db": "PACKETSTORM",
        "id": "171144"
      },
      {
        "db": "PACKETSTORM",
        "id": "170555"
      },
      {
        "db": "PACKETSTORM",
        "id": "171042"
      },
      {
        "db": "PACKETSTORM",
        "id": "171040"
      },
      {
        "db": "PACKETSTORM",
        "id": "171470"
      }
    ],
    "trust": 0.8
  },
  "cve": "CVE-2022-40304",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "id": "CVE-2022-40304",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-40304",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
            "id": "CVE-2022-40304",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202210-1022",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "An issue was discovered in libxml2 before 2.10.3. Certain invalid XML entity definitions can corrupt a hash table key, potentially leading to subsequent logic errors. In one case, a double-free can be provoked. It is written in C language and can be called by many languages, such as C language, C++, XSH. Currently there is no information about this vulnerability, please keep an eye on CNNVD or vendor announcements. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.6.4 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/\n\nIssue addressed:\n\n* RHACM 2.6.4 images (BZ# 2153382)\n\nSecurity fixes:\n\n* CVE-2022-24999 express: \"qs\" prototype poisoning causes the hang of the\nnode process\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2150323 - CVE-2022-24999 express: \"qs\" prototype poisoning causes the hang of the node process\n2153382 - RHACM 2.6.4 images\n\n5. Description:\n\nNetwork observability is an OpenShift operator that provides a monitoring\npipeline to collect and enrich network flows that are produced by the\nNetwork observability eBPF agent. \n\nThe operator provides dashboards, metrics, and keeps flows accessible in a\nqueryable log store, Grafana Loki. 
When a FlowCollector is deployed, new\ndashboards are available in the Console. Solution:\n\nApply this errata by upgrading Network observability operator 1.0 to 1.1\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2169468 - CVE-2023-0813 network-observability-console-plugin-container: setting Loki authToken configuration to DISABLE or HOST mode leads to authentication longer being enforced\n\n5. Description:\n\nRed Hat Openshift GitOps is a declarative way to implement continuous\ndeployment for cloud native applications. Bugs fixed (https://bugzilla.redhat.com/):\n\n2160492 - CVE-2023-22482 ArgoCD: JWT audience claim is not verified\n\n5. Description:\n\nMigration Toolkit for Applications 6.0.1 Images\n\nSecurity Fix(es) from Bugzilla:\n\n* loader-utils: prototype pollution in function parseQuery in parseQuery.js\n(CVE-2022-37601)\n\n* Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds\nwriting (CVE-2022-42920)\n\n* gin: Unsanitized input in the default logger in github.com/gin-gonic/gin\n(CVE-2020-36567)\n\n* glob-parent: Regular Expression Denial of Service (CVE-2021-35065)\n\n* express: \"qs\" prototype poisoning causes the hang of the node process\n(CVE-2022-24999)\n\n* loader-utils:Regular expression denial of service (CVE-2022-37603)\n\n* golang: net/http: An attacker can cause excessive memory growth in a Go\nserver accepting HTTP/2 requests (CVE-2022-41717)\n\n* json5: Prototype Pollution in JSON5 via Parse Method (CVE-2022-46175)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, and other related information, refer to the CVE page(s) listed in\nthe References section. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
JIRA issues fixed (https://issues.jboss.org/):\n\nMTA-103 - MTA 6.0.1 Installation failed with CrashLoop Error for UI Pod\nMTA-106 - Implement ability for windup addon image pull policy to be configurable\nMTA-122 - MTA is upgrading automatically ignoring \u0027Manual\u0027 setting\nMTA-123 - MTA Becomes unusable when running bulk binary analysis\nMTA-127 - After upgrading MTA operator from 6.0.0 to 6.0.1 and running analysis , task pods starts failing \nMTA-131 - Analysis stops working after MTA upgrade from 6.0.0 to 6.0.1\nMTA-36 - Can\u0027t disable a proxy if it has an invalid configuration\nMTA-44 - Make RWX volumes optional. \nMTA-49 - Uploaded a local binary when return back to the page the UI should show green bar and correct %\nMTA-59 - Getting error 401 if deleting many credentials quickly\nMTA-65 - Set windup addon image pull policy to be controlled by the global image_pull_policy parameter\nMTA-72 - CVE-2022-46175 mta-ui-container: json5: Prototype Pollution in JSON5 via Parse Method [mta-6]\nMTA-73 - CVE-2022-37601 mta-ui-container: loader-utils: prototype pollution in function parseQuery in parseQuery.js [mta-6]\nMTA-74 - CVE-2020-36567 mta-windup-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-76 - CVE-2022-37603 mta-ui-container: loader-utils:Regular expression denial of service [mta-6]\nMTA-77 - CVE-2020-36567 mta-hub-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin [mta-6]\nMTA-80 - CVE-2021-35065 mta-ui-container: glob-parent: Regular Expression Denial of Service [mta-6]\nMTA-82 - CVE-2022-42920 org.jboss.windup-windup-cli-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]\nMTA-85 - CVE-2022-24999 mta-ui-container: express: \"qs\" prototype poisoning causes the hang of the node process [mta-6]\nMTA-88 - CVE-2020-36567 mta-admin-addon-container: gin: Unsanitized input in the default logger in github.com/gin-gonic/gin 
[mta-6]\nMTA-92 - CVE-2022-42920 org.jboss.windup.plugin-windup-maven-plugin-parent: Apache-Commons-BCEL: arbitrary bytecode produced via out-of-bounds writing [mta-6.0]\nMTA-96 - [UI] Maven -\u003e \"Local artifact repository\" textbox can be checked and has no tooltip\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: libxml2 security update\nAdvisory ID:       RHSA-2023:0173-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2023:0173\nIssue date:        2023-01-16\nCVE Names:         CVE-2022-40303 CVE-2022-40304\n====================================================================\n1. Summary:\n\nAn update for libxml2 is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe libxml2 library is a development toolbox providing the implementation\nof various XML standards. \n\nSecurity Fix(es):\n\n* libxml2: integer overflows with XML_PARSE_HUGE (CVE-2022-40303)\n\n* libxml2: dict corruption caused by entity reference cycles\n(CVE-2022-40304)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe desktop must be restarted (log out, then log back in) for this update\nto take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE\n2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\naarch64:\nlibxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.aarch64.rpm\nlibxml2-devel-2.9.7-15.el8_7.1.aarch64.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm\n\nppc64le:\nlibxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.ppc64le.rpm\nlibxml2-devel-2.9.7-15.el8_7.1.ppc64le.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm\n\ns390x:\nlibxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.s390x.rpm\nlibxml2-devel-2.9.7-15.el8_7.1.s390x.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm\n\nx86_64:\nlibxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm\nlibxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.i686.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.x86_64.rpm\nlibxml2-devel-2.9.7-15.el8_7.1.i686.rpm\nlibxml2-devel-2.9.7-15.el8_7.1.x86_64.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm\n\nRed Hat Enterprise Linux BaseOS (v. 
8):\n\nSource:\nlibxml2-2.9.7-15.el8_7.1.src.rpm\n\naarch64:\nlibxml2-2.9.7-15.el8_7.1.aarch64.rpm\nlibxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.aarch64.rpm\npython3-libxml2-2.9.7-15.el8_7.1.aarch64.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.aarch64.rpm\n\nppc64le:\nlibxml2-2.9.7-15.el8_7.1.ppc64le.rpm\nlibxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.ppc64le.rpm\npython3-libxml2-2.9.7-15.el8_7.1.ppc64le.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.ppc64le.rpm\n\ns390x:\nlibxml2-2.9.7-15.el8_7.1.s390x.rpm\nlibxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.s390x.rpm\npython3-libxml2-2.9.7-15.el8_7.1.s390x.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.s390x.rpm\n\nx86_64:\nlibxml2-2.9.7-15.el8_7.1.i686.rpm\nlibxml2-2.9.7-15.el8_7.1.x86_64.rpm\nlibxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm\nlibxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.i686.rpm\nlibxml2-debugsource-2.9.7-15.el8_7.1.x86_64.rpm\npython3-libxml2-2.9.7-15.el8_7.1.x86_64.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.i686.rpm\npython3-libxml2-debuginfo-2.9.7-15.el8_7.1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-40303\nhttps://access.redhat.com/security/cve/CVE-2022-40304\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY8UoQ9zjgjWX9erEAQgOHQ/+Ns7MY8MsoyU3wlWkuTW5mCenVYaSQa90\nnHACMcvLgOKjM61s7FTXHnvV52TKj/+kZRToW2MCOTfuLsYnP0bZ+DFLkhDxoIGR\nwN6X2Mgh/vtBmdLGtW8bjclpJuYLoGrjfoigFOZgXbRrKBNYLZqLPNutHzcF1IB2\nhxdTDn7W+RNjCiP8+l+cTGYx0A9e1rYkCEx5B8qKfJY11/ojBTvxMf2jVnkFM9gz\nZwVCDtUyO7S7B5l6OqvH9qcR8dBOMw5KpaE4wGc+RF9iYI3t68xJlB2bj21Eb1oW\nI4OwkkOh9i96f2XtusnTZIdJWVEMHJ3ZjM8a40nB7OzV0zSRRml61CLvLur6YAdo\nnxQ3bstsq2+NhK/J0pHLUaVLQxeePgvHICJBIBXRV/bFHZw3qADo08FmvcVh4y9t\nHSyYP6ZdofwxeR6elSke2cM57RWIcDVB8+o6ESUN4q5QMp6xjmA+82tHLmbguwyb\nRMTW46jCZ3tZOo5+zIXBGlwvMZGv5PDzzgjwEboxBoWTGegBdPJkNNmezj9pZcyB\n0l2Uh2LtC/uPbqBFzsPy94pyEd4VoRAY5/RBS+PgLCJm4o2qsaTN75jqHpSQXgw8\nCfZT3+0XnYvsYHBt8jtiVUpHJpbfh9vNNjXzcLO/JKCv8NW3So1MfV2A+mT/mDmh\nnCQ8kAI62fw=pLiQ\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n2156729 - CVE-2021-4238 goutils: RandomAlphaNumeric and CryptoRandomAlphaNumeric are not as random as they should be\n2163037 - CVE-2022-3064 go-yaml: Improve heuristics preventing CPU/memory abuse by parsing malicious or large YAML documents\n2167819 - CVE-2023-23947 ArgoCD: Users with any cluster secret update access may update out-of-bounds cluster secrets\n\n5. \n\nCVE-2022-40303\n\n    Maddie Stone discovered that missing safety checks in several\n    functions can result in integer overflows when parsing a XML\n    document with the XML_PARSE_HUGE option enabled. \n\nCVE-2022-40304\n\n    Ned Williamson and Nathan Wachholz discovered a vulnerability when\n    handling detection of entity reference cycles, which may result in\n    corrupted dictionary entries. This flaw may lead to logic errors,\n    including memory errors like double free flaws. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.9.10+dfsg-6.7+deb11u3. 
\n\nWe recommend that you upgrade your libxml2 packages. \n\nFor the detailed security status of libxml2 please refer to its security\ntracker page at:\nhttps://security-tracker.debian.org/tracker/libxml2\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmNmu0JfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0QPYQ/5AQtGMTA78fVUCt9lBuzBbGe5F/2jOHmP2CIlEbV3RnHtZe9wMGlcYvSq\n4hDRvu8OEHmSwqHyRbghQt2oCvSLhtIhqxf0Itro5Pv8PWrhoFy6TdNX9tW+ARqz\nTIF93hiHeTuQO+XqkTct50KUlTB6ccZDREqx7tGK4B/8fHM+34vPca0nRpWHUaXM\naUJtgGNRvO+th3qulMKGGzE2K05678D1RYngF3T/NJ0+aobfdd4dVOobyVqotF76\nthwO04V54oXVNuaRUJ5ItGKcY6sLx0l0cJ+HysB93xS2TGDAZhayFkMRKTk7hfOo\nB08LPMnYjcbb+A1cezT+XpgcTOir+pZXuNSUuB7Q1vwERXGnhdMC0Z8hwTL5o18A\nh4S6fk7vPSIIOzOGkAWYQ0mjsG5c45rXDVCp4u9LwIqIEAq/pXI7UTp+kJFSRerk\nQ8lWBuTiwjlaaFGslANxmbPJtfS4PHFFLPnjJJArhJI0CRi5cq3Htruw/5sVgwTj\nXu+d1ht/jmmNfNS4qTDBpq1wJVmpc82qrxT8uDLp1BK2nBx6tSnsDViENIPBZE3O\nOzmIYYCAuXvhKK4QG9H/02hBBhwSFVI/TnFzFhF8KyA95VXCYpBkUpweU0in4PuK\nl4/gd0lPZGeJDw7VI+p7HG5GeOA/7d+NkvwrHaHe4iShFfHYZxI\\x8etg\n-----END PGP SIGNATURE-----\n. Description:\n\nRed Hat OpenShift Service Mesh is the Red Hat distribution of the Istio\nservice mesh project, tailored for installation into an on-premise\nOpenShift Container Platform installation. \n\nThis advisory covers container images for the release. Bugs fixed (https://bugzilla.redhat.com/):\n\n2161274 - CVE-2022-41717 golang: net/http: An attacker can cause excessive memory growth in a Go server accepting HTTP/2 requests\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nOSSM-1330 - Allow specifying secret as pilot server cert when using CertificateAuthority: Custom\nOSSM-2342 - Run OSSM operator on infrastructure nodes\nOSSM-2371 - Kiali in read-only mode still can change the log level of the envoy proxies\nOSSM-2373 - Can\u0027t login to Kiali with \"Error trying to get OAuth metadata\"\nOSSM-2374 - Deleting a SMM also deletes the SMMR in OpenShift Service Mesh\nOSSM-2492 - Default tolerations in SMCP not passed to Jaeger\nOSSM-2493 - Default nodeSelector and tolerations in SMCP not passed to Kiali\nOSSM-3317 - Error: deployment.accessible_namespaces set to [\u0027**\u0027]\n\n6",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      },
      {
        "db": "VULHUB",
        "id": "VHN-429438"
      },
      {
        "db": "PACKETSTORM",
        "id": "171025"
      },
      {
        "db": "PACKETSTORM",
        "id": "171016"
      },
      {
        "db": "PACKETSTORM",
        "id": "170753"
      },
      {
        "db": "PACKETSTORM",
        "id": "171144"
      },
      {
        "db": "PACKETSTORM",
        "id": "170555"
      },
      {
        "db": "PACKETSTORM",
        "id": "171042"
      },
      {
        "db": "PACKETSTORM",
        "id": "171040"
      },
      {
        "db": "PACKETSTORM",
        "id": "169732"
      },
      {
        "db": "PACKETSTORM",
        "id": "171470"
      }
    ],
    "trust": 1.8
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-429438",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429438"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-40304",
        "trust": 2.6
      },
      {
        "db": "PACKETSTORM",
        "id": "170555",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "169732",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "169824",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169857",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170318",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169620",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170955",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170097",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170754",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0246",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.3732",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1467",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5286",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.3143",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6321",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5792.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0816",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1501",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5614",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1267",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5455",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1041",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1398",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "170753",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "171016",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "171042",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "171040",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "170317",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170316",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171173",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171043",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170752",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170899",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170096",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170312",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169858",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171017",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170315",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171260",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-429438",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171025",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171144",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171470",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429438"
      },
      {
        "db": "PACKETSTORM",
        "id": "171025"
      },
      {
        "db": "PACKETSTORM",
        "id": "171016"
      },
      {
        "db": "PACKETSTORM",
        "id": "170753"
      },
      {
        "db": "PACKETSTORM",
        "id": "171144"
      },
      {
        "db": "PACKETSTORM",
        "id": "170555"
      },
      {
        "db": "PACKETSTORM",
        "id": "171042"
      },
      {
        "db": "PACKETSTORM",
        "id": "171040"
      },
      {
        "db": "PACKETSTORM",
        "id": "169732"
      },
      {
        "db": "PACKETSTORM",
        "id": "171470"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "id": "VAR-202210-1070",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429438"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:47:46.905000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "libxml2 Fixes for code issue vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=215772"
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-415",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-611",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429438"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20221209-0003/"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213531"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213533"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213534"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213535"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213536"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/dec/21"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/dec/24"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/dec/25"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/dec/26"
      },
      {
        "trust": 1.7,
        "url": "https://gitlab.gnome.org/gnome/libxml2/-/commit/1b41ec4e9433b05bb0376be4725804c54ef1d80b"
      },
      {
        "trust": 1.7,
        "url": "https://gitlab.gnome.org/gnome/libxml2/-/tags"
      },
      {
        "trust": 1.7,
        "url": "https://gitlab.gnome.org/gnome/libxml2/-/tags/v2.10.3"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2022/dec/27"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-40304"
      },
      {
        "trust": 0.8,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-40303"
      },
      {
        "trust": 0.8,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40303"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40304"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-47629"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1041"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.3143"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170555/red-hat-security-advisory-2023-0173-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1267"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1467"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170318/apple-security-advisory-2022-12-13-8.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1501"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht213505"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5286"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170955/red-hat-security-advisory-2023-0634-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169857/apple-security-advisory-2022-11-09-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170754/red-hat-security-advisory-2023-0468-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170097/ubuntu-security-notice-usn-5760-2.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht213534"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.3732"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0246"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-40304/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169732/debian-security-advisory-5271-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/libxml2-three-vulnerabilities-39554"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169824/libxml2-attribute-parsing-double-free.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1398"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0816"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169620/gentoo-linux-security-advisory-202210-39.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6321"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0513"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5792.2"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5455"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5614"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-35737"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46848"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-46848"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-43680"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-23521"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-41903"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-42012"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-42010"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-42011"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23521"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35737"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2867"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2056"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2953"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2058"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2520"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2057"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2868"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2519"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2058"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2057"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2869"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2056"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2521"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2519"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1304"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-42898"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43680"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42012"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42010"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42011"
      },
      {
        "trust": 0.2,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-41717"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4238"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3064"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23947"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-47629"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-3064"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4238"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-41903"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2023-23947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2964"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2869"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0794"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2520"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2953"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2964"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2868"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-44617"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2867"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-4883"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-4139"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46285"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3715"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3821"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3602"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-33099"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-33099"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-34903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-0813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-34903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2509"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3515"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3715"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3515"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2509"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-22482"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-22482"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/cicd/gitops/understanding-openshift-gitops.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3775"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25308"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25310"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26719"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-37603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-35065"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30293"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-21835"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42920"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27405"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-21843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-46175"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27406"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36567"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22628"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25309"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-37601"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3787"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22628"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2601"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27404"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-21830"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36567"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0173"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0804"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0802"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/libxml2"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-45061"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:1448"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28861"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-41717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-40897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28861"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-4415"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-40897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-48303"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2023-23916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4415"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-45061"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10735"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-429438"
      },
      {
        "db": "PACKETSTORM",
        "id": "171025"
      },
      {
        "db": "PACKETSTORM",
        "id": "171016"
      },
      {
        "db": "PACKETSTORM",
        "id": "170753"
      },
      {
        "db": "PACKETSTORM",
        "id": "171144"
      },
      {
        "db": "PACKETSTORM",
        "id": "170555"
      },
      {
        "db": "PACKETSTORM",
        "id": "171042"
      },
      {
        "db": "PACKETSTORM",
        "id": "171040"
      },
      {
        "db": "PACKETSTORM",
        "id": "169732"
      },
      {
        "db": "PACKETSTORM",
        "id": "171470"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-429438"
      },
      {
        "db": "PACKETSTORM",
        "id": "171025"
      },
      {
        "db": "PACKETSTORM",
        "id": "171016"
      },
      {
        "db": "PACKETSTORM",
        "id": "170753"
      },
      {
        "db": "PACKETSTORM",
        "id": "171144"
      },
      {
        "db": "PACKETSTORM",
        "id": "170555"
      },
      {
        "db": "PACKETSTORM",
        "id": "171042"
      },
      {
        "db": "PACKETSTORM",
        "id": "171040"
      },
      {
        "db": "PACKETSTORM",
        "id": "169732"
      },
      {
        "db": "PACKETSTORM",
        "id": "171470"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-11-23T00:00:00",
        "db": "VULHUB",
        "id": "VHN-429438"
      },
      {
        "date": "2023-02-16T15:44:21",
        "db": "PACKETSTORM",
        "id": "171025"
      },
      {
        "date": "2023-02-16T15:41:43",
        "db": "PACKETSTORM",
        "id": "171016"
      },
      {
        "date": "2023-01-26T15:34:56",
        "db": "PACKETSTORM",
        "id": "170753"
      },
      {
        "date": "2023-02-28T16:03:55",
        "db": "PACKETSTORM",
        "id": "171144"
      },
      {
        "date": "2023-01-17T17:07:25",
        "db": "PACKETSTORM",
        "id": "170555"
      },
      {
        "date": "2023-02-17T16:04:17",
        "db": "PACKETSTORM",
        "id": "171042"
      },
      {
        "date": "2023-02-17T16:01:57",
        "db": "PACKETSTORM",
        "id": "171040"
      },
      {
        "date": "2022-11-07T15:19:42",
        "db": "PACKETSTORM",
        "id": "169732"
      },
      {
        "date": "2023-03-24T16:45:17",
        "db": "PACKETSTORM",
        "id": "171470"
      },
      {
        "date": "2022-10-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      },
      {
        "date": "2022-11-23T18:15:12.167000",
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-02-23T00:00:00",
        "db": "VULHUB",
        "id": "VHN-429438"
      },
      {
        "date": "2023-06-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      },
      {
        "date": "2025-04-28T20:15:19.607000",
        "db": "NVD",
        "id": "CVE-2022-40304"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "libxml2 Code problem vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1022"
      }
    ],
    "trust": 0.6
  }
}

VAR-202010-1327

Vulnerability from variot - Updated: 2025-12-22 22:47

A logic issue was addressed with improved validation. This issue is fixed in iCloud for Windows 7.17, iTunes 12.10.4 for Windows, iCloud for Windows 10.9.2, tvOS 13.3.1, Safari 13.0.5, iOS 13.3.1 and iPadOS 13.3.1. A DOM object context may not have had a unique security origin. WebKitGTK is a set of open source web browser engines jointly developed by companies such as KDE, Apple (Apple), and Google (Google). Security vulnerabilities exist in WebKitGTK versions prior to 2.26.4 and WPE WebKit versions prior to 2.26.4. An attacker could exploit this vulnerability to interact with objects in other domains. Description:

Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume
1813506 - Dockerfile not compatible with docker and buildah
1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement
1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node
1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
1842254 - [NooBaa] Compression stats do not add up when compression is disabled
1845976 - OCS 4.5 Independent mode: must-gather commands fail to collect ceph command outputs from external cluster
1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
1854501 - [Tracker-rhcs bug 1848494] pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class failing to mount
1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)
1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
1859478 - OCS 4.6: Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
1860034 - OCS 4.6 Deployment in ocs-ci: Toolbox pod in ContainerCreationError due to key admin-secret not found
1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update Advisory ID: RHSA-2020:5633-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633 Issue date: 2021-02-24 CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
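
When mirroring or upgrading, it is generally safer to reference a release image by digest rather than by tag, since a digest pins the exact reviewed content. A minimal sketch in plain shell (the `repo`, `digest`, and `pullspec` variable names are illustrative; the values are copied from the x86_64 release above):

```shell
# Build a digest-pinned pullspec for the x86_64 release image.
# A digest reference always resolves to the same image content,
# even if the corresponding tag is later re-pushed.
repo="quay.io/openshift-release-dev/ocp-release"
digest="sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70"
pullspec="${repo}@${digest}"
echo "${pullspec}"
```

The resulting pullspec can be passed to `oc adm release info` in place of the tagged form shown above.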

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.
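
When checking whether a deployed component already meets the fixed release, a version-aware comparison is needed so that, for example, 4.10 sorts above 4.7. A generic sketch in plain shell (the `version_ge` helper is hypothetical, not part of `oc`; it assumes GNU `sort -V`):

```shell
# Hypothetical helper: succeeds when installed version $1 >= fixed version $2.
# sort -V performs a version sort, so numeric components compare correctly.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: a 4.7.0 cluster is at or above a hypothetical fixed release 4.6.16.
if version_ge "4.7.0" "4.6.16"; then
  echo "at or above the fixed release"
fi
```

The same pattern applies to any versioned component called out in an advisory; substitute the installed and fixed versions as appropriate.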

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

  4. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state 1752220 - [OVN] Network Policy fails to work when project label gets overwritten 1756096 - Local storage operator should implement must-gather spec 1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 1768255 - installer reports 100% complete but failing components 1770017 - Init containers restart when the exited container is removed from node. 1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating 1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset 1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale 1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands 1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions 1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved" 1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor 1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image 1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration 1806000 - CRI-O failing with: error reserving ctr name 1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1810438 - Installation logs are not gathered from OCP nodes 1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist 1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation 1813012 - EtcdDiscoveryDomain no longer needed 1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints 1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use 1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist 1819457 - Package Server is in 'Cannot update' status despite properly working 1820141 - [RFE] deploy qemu-quest-agent on the nodes 1822744 - OCS Installation CI test flaking 1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario 1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool 1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file 1829723 - User workload monitoring alerts fire out of the box 1832968 - oc adm catalog mirror does not mirror the index image itself 1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN 1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters 1834995 - olmFull suite always fails once th suite is run on the same 
cluster 1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz 1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4 1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks 1838751 - [oVirt][Tracker] Re-enable skipped network tests 1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups 1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed 1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP 1841119 - Get rid of config patches and pass flags directly to kcm 1841175 - When an Install Plan gets deleted, OLM does not create a new one 1841381 - Issue with memoryMB validation 1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option 1844727 - Etcd container leaves grep and lsof zombie processes 1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs 1847074 - Filter bar layout issues at some screen widths on search page 1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural 1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5 1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service 1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard 1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing 1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD 1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service 1853115 - the restriction of --cloud option should be shown in help text. 
1853116 - --to option does not work with --credentials-requests flag. 1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854567 - "Installed Operators" list showing "duplicated" entries during installation 1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present 1855351 - Inconsistent Installer reactions to Ctrl-C during user input process 1855408 - OVN cluster unstable after running minimal scale test 1856351 - Build page should show metrics for when the build ran, not the last 30 minutes 1856354 - New APIServices missing from OpenAPI definitions 1857446 - ARO/Azure: excessive pod memory allocation causes node lockup 1857877 - Operator upgrades can delete existing CSV before completion 1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed 1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created 1860136 - default ingress does not propagate annotations to route object on update 1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed" 1860518 - unable to stop a crio pod 1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller 1862430 - LSO: PV creation lock should not be acquired in a loop 1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
1862608 - Virtual media does not work on hosts using BIOS, only UEFI 1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network 1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff 1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt 1866043 - Configurable table column headers can be illegible 1866087 - Examining agones helm chart resources results in "Oh no!" 1866261 - Need to indicate the intentional behavior for Ansible in the create api help info 1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement 1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity 1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help 1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed 1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations 1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x 1866482 - Few errors are seen when oc adm must-gather is run 1866605 - No metadata.generation set for build and buildconfig objects 1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name 1866901 - Deployment strategy for BMO allows multiple pods to run at the same time 1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
1867165 - Cannot assign static address to baremetal install bootstrap vm 1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig 1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS 1867477 - HPA monitoring cpu utilization fails for deployments which have init containers 1867518 - [oc] oc should not print so many goroutines when ANY command fails 1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster 1867965 - OpenShift Console Deployment Edit overwrites deployment yaml 1868004 - opm index add appears to produce image with wrong registry server binary 1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table" 1868104 - Baremetal actuator should not delete Machine objects 1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead 1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters 1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node 1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running 1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation 1868765 - [vsphere][ci] could not reserve an IP address: no available addresses 1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster 1868976 - Prometheus error opening query log file on EBS backed PVC 1869293 - The configmap name looks confusing in aide-ds pod logs 1869606 - crio's failing to delete a network namespace 1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes 1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] 1870373 - Ingress Operator reports available 
when DNS fails to provision 1870467 - D/DC Part of Helm / Operator Backed should not have HPA 1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json 1870800 - [4.6] Managed Column not appearing on Pods Details page 1871170 - e2e tests are needed to validate the functionality of the etcdctl container 1872001 - EtcdDiscoveryDomain no longer needed 1872095 - content are expanded to the whole line when only one column in table on Resource Details page 1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console 1872128 - Can't run container with hostPort on ipv6 cluster 1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective 1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity 1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them 1872821 - [DOC] Typo in Ansible Operator Tutorial 1872907 - Fail to create CR from generated Helm Base Operator 1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page) 1873007 - [downstream] failed to read config when running the operator-sdk in the home path 1873030 - Subscriptions without any candidate operators should cause resolution to fail 1873043 - Bump to latest available 1.19.x k8s 1873114 - Nodes goes into NotReady state (VMware) 1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem 1873305 - Failed to power on /inspect node when using Redfish protocol 1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information 1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon 
in Developer Console ->Navigation 1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working 1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters 1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\"" 1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver 1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider 1874240 - [vsphere] unable to deprovision - Runtime error list attached objects 1874248 - Include validation for vcenter host in the install-config 1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6 1874583 - apiserver tries and fails to log an event when shutting down 1874584 - add retry for etcd errors in kube-apiserver 1874638 - Missing logging for nbctl daemon 1874736 - [downstream] no version info for the helm-operator 1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution 1874968 - Accessibility: The project selection drop down is a keyboard trap 1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users 1875516 - disabled scheduling is easy to miss in node page of OCP console 1875598 - machine status is Running for a master node which has been terminated from the console 1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. 
1876166 - need to be able to disable kube-apiserver connectivity checks 1876469 - Invalid doc link on yaml template schema description 1876701 - podCount specDescriptor change doesn't take effect on operand details page 1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt 1876935 - AWS volume snapshot is not deleted after the cluster is destroyed 1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted 1877105 - add redfish to enabled_bios_interfaces 1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted 1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown 1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices' 1877681 - Manually created PV can not be used 1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53 1877740 - RHCOS unable to get ip address during first boot 1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5 1877919 - panic in multus-admission-controller 1877924 - Cannot set BIOS config using Redfish with Dell iDracs 1878022 - Met imagestreamimport error when import the whole image repository 1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated 1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status 1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM 1878766 - CPU consumption on nodes is higher than the CPU count of the node. 1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image" 1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode 1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used 1878953 - RBAC error shows when normal user access pvc upload page 1878956 - oc api-resources does not include API version 1878972 - oc adm release mirror removes the architecture information 1879013 - [RFE]Improve CD-ROM interface selection 1879056 - UI should allow to change or unset the evictionStrategy 1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled 1879094 - RHCOS dhcp kernel parameters not working as expected 1879099 - Extra reboot during 4.5 -> 4.6 upgrade 1879244 - Error adding container to network "ipvlan-host-local": "master" field is required 1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder 1879282 - Update OLM references to point to the OLM's new doc site 1879283 - panic after nil pointer dereference in pkg/daemon/update.go 1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests 1879419 - [RFE]Improve boot source description for 'Container' and ‘URL’ 1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted. 1879565 - IPv6 installation fails on node-valid-hostname 1879777 - Overlapping, divergent openshift-machine-api namespace manifests 1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy 1879930 - Annotations shouldn't be removed during object reconciliation 1879976 - No other channel visible from console 1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc. 
1880148 - dns daemonset rolls out slowly in large clusters 1880161 - Actuator Update calls should have fixed retry time 1880259 - additional network + OVN network installation failed 1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed" 1880410 - Convert Pipeline Visualization node to SVG 1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn 1880443 - broken machine pool management on OpenStack 1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s. 1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation 1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables) 1880785 - CredentialsRequest missing description in oc explain 1880787 - No description for Provisioning CRD for oc explain 1880902 - need dnsPlocy set in crd ingresscontrollers 1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster 1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use 1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets 1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node 1881268 - Image uploading failed but wizard claim the source is available 1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration 1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup 1881881 - unable to specify target port manually resulting in application not reachable 1881898 - misalignment of sub-title in quick start headers 1882022 - [vsphere][ipi] directory path is incomplete, 
terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels needs to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Dont prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses depricated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp lose sync openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: make check broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display.
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by it's name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove &apos; from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - `oc adm release mirror` panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service account issuer
1902091 - `cluster-image-registry-operator` pod leaves connections open when it fails to connect to S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesn't set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format:
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Bootstrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE - Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually executing
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalidates previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN] Cannot create egressfirewall with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE gherkin design scenario - pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate "this" without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-config.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN] Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accessible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - 'Placeholder' value in search filter does not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen 1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance 1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent 1907293 - Increase timeouts in e2e tests 1907295 - Gherkin script for improve management for helm 1907299 - Advanced Subscription Badge for KMS and Arbiter not present 1907303 - Align VM template list items by baseline 1907304 - Use PF styles for selected template card in VM Wizard 1907305 - Drop 'ISO' from CDROM boot source message 1907307 - Support and provider labels should be passed on between templates and sources 1907310 - Pin action should be renamed to favorite 1907312 - VM Template source popover is missing info about added date 1907313 - ClusterOperator objects cannot be overriden with cvo-overrides 1907328 - iproute-tc package is missing in ovn-kube image 1907329 - CLUSTER_PROFILE env. variable is not used by the CVO 1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached" 1907373 - Rebase to kube 1.20.0 1907375 - Bump to latest available 1.20.x k8s - workloads team 1907378 - Gather netnamespaces networking info 1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity 1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one 1907390 - prometheus-adapter: panic after k8s 1.20 bump 1907399 - build log icon link on topology nodes cause app to reload 1907407 - Buildah version not accessible 1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer" 1907453 - Dev Perspective -> running vm details -> resources -> no data 1907454 - Install PodConnectivityCheck CRD with CNO 1907459 - "The Boot source is also maintained by Red Hat." 
is always shown for all boot sources 1907475 - Unable to estimate the error rate of ingress across the connected fleet 1907480 -Active alertssection throwing forbidden error for users. 1907518 - Kamelets/Eventsource should be shown to user if they have create access 1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US 1907610 - Update kubernetes deps to 1.20 1907612 - Update kubernetes deps to 1.20 1907621 - openshift/installer: bump cluster-api-provider-kubevirt version 1907628 - Installer does not set primary subnet consistently 1907632 - Operator Registry should update its kubernetes dependencies to 1.20 1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters 1907644 - fix up handling of non-critical annotations on daemonsets/deployments 1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?) 1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication 1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail 1907767 - [e2e][automation]update test suite for kubevirt plugin 1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot 1907792 - Theoverridesof the OperatorCondition cannot block the operator upgrade 1907793 - Surface support info in VM template details 1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage 1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set 1907863 - Quickstarts status not updating when starting the tour 1907872 - dual stack with an ipv6 network fails on bootstrap phase 1907874 - QE - Design Gherkin Scenarios for epic ODC-5057 1907875 - No response when try to expand pvc with an invalid size 1907876 - Refactoring record package to make gatherer configurable 1907877 - QE - Automation- pipelines builder scripts 1907883 
- Fix Pipleine creation without namespace issue 1907888 - Fix pipeline list page loader 1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form 1907892 - Unable to edit application deployed using "From Devfile" option 1907893 - navSortUtils.spec.ts unit test failure 1907896 - When a workload is added, Topology does not place the new items well 1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template 1907924 - Enable madvdontneed in OpenShift Images 1907929 - Enable madvdontneed in OpenShift System Components Part 2 1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot 1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context 1907948 - OCM-O bump to k8s 1.20 1907952 - bump to k8s 1.20 1907972 - Update OCM link to open Insights tab 1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI 1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916 1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni 1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk 1908035 - dynamic-demo-plugin build does not generate dist directory 1908135 - quick search modal is not centered over topology 1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled 1908159 - [AWS C2S] MCO fails to sync cloud config 1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384) 1908180 - Add source for template is stucking in preparing pvc 1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens 1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN 1908277 - QE - Automation- pipelines actions scripts 1908280 - Documentation 
describingignore-volume-azis incorrect 1908296 - Fix pipeline builder form yaml switcher validation issue 1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI 1908323 - Create button missing for PLR in the search page 1908342 - The new pv_collector_total_pv_count is not reported via telemetry 1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name 1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots 1908349 - Volume snapshot tests are failing after 1.20 rebase 1908353 - QE - Automation- pipelines runs scripts 1908361 - bump to k8s 1.20 1908367 - QE - Automation- pipelines triggers scripts 1908370 - QE - Automation- pipelines secrets scripts 1908375 - QE - Automation- pipelines workspaces scripts 1908381 - Go Dependency Fixes for Devfile Lib 1908389 - Loadbalancer Sync failing on Azure 1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived 1908407 - Backport Upstream 95269 to fix potential crash in kubelet 1908410 - Exclude Yarn from VSCode search 1908425 - Create Role Binding form subject type and name are undefined when All Project is selected 1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods 1908434 - Remove &apos from metal3-plugin internationalized strings 1908437 - Operator backed with no icon has no badge associated with the CSV tag 1908459 - bump to k8s 1.20 1908461 - Add bugzilla component to OWNERS file 1908462 - RHCOS 4.6 ostree removed dhclient 1908466 - CAPO AZ Screening/Validating 1908467 - Zoom in and zoom out in topology package should be sentence case 1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size 1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster 1908471 - OLM should bump k8s dependencies to 1.20 1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all 
manifests 1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1908545 - VM clone dialog does not open 1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard 1908562 - Pod readiness is not being observed in real world cases 1908565 - [4.6] Cannot filter the platform/arch of the index image 1908573 - Align the style of flavor 1908583 - bootstrap does not run on additional networks if configured for master in install-config 1908596 - Race condition on operator installation 1908598 - Persistent Dashboard shows events for all provisioners 1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state 1908648 - Skip TestKernelType test on OKD, adjust TestExtensions 1908650 - The title of customize wizard is inconsistent 1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator 1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s] 1908687 - Option to save user settings separate when using local bridge (affects console developers only) 1908697 - Showkubectl diff command in the oc diff help page 1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom 1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds 1908717 - "missing unit character in duration" error in some network dashboards 1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload 1908747 - stale S3 CredentialsRequest in CCO manifest 1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase 1908830 - RHCOS 4.6 - Missing Initiatorname 1908868 - Update empty state message for EventSources and Channels tab 1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite 
tolerations from tainted nodes 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1908888 - Dualstack does not work with multiple gateways 1908889 - Bump CNO to k8s 1.20 1908891 - TestDNSForwarding DNS operator e2e test is failing frequently 1908914 - CNO: upgrade nodes before masters 1908918 - Pipeline builder yaml view sidebar is not responsive 1908960 - QE - Design Gherkin Scenarios 1908971 - Gherkin Script for pipeline debt 4.7 1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated 1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console 1908998 - [cinder-csi-driver] doesn't detect the credentials change 1909004 - "No datapoints found" for RHEL node's filesystem graph 1909005 - i18n: workloads list view heading is not translated 1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects 1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type 1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware 1909067 - Web terminal should keep latest output when connection closes 1909070 - PLR and TR Logs component is not streaming as fast as tkn 1909092 - Error Message should not confuse user on Channel form 1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page 1909108 - Machine API components should use 1.20 dependencies 1909116 - Catalog Sort Items dropdown is not aligned on Firefox 1909198 - Move Sink action option is not working 1909207 - Accessibility Issue on monitoring page 1909236 - Remove pinned icon overlap on resource name 1909249 - Intermittent packet drop from pod to pod 1909276 - Accessibility Issue on create project modal 1909289 - oc debug of an init container no longer 
works 1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2 1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle 1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it 1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O 1909464 - Build operator-registry with golang-1.15 1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found 1909521 - Add kubevirt cluster type for e2e-test workflow 1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created 1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node 1909610 - Fix available capacity when no storage class selected 1909678 - scale up / down buttons available on pod details side panel 1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined 1909739 - Arbiter request data changes 1909744 - cluster-api-provider-openstack: Bump gophercloud 1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline 1909791 - Update standalone kube-proxy config for EndpointSlice 1909792 - Empty states for some details page subcomponents are not i18ned 1909815 - Perspective switcher is only half-i18ned 1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body 1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI 1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing 1909911 - [OVN]EgressFirewall caused a segfault 1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument 1909958 - Support Quick Start Highlights Properly 1909978 - 
ignore-volume-az = yes not working on standard storageClass 1909981 - Improve statement in template select step 1909992 - Fail to pull the bundle image when using the private index image 1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev 1910036 - QE - Design Gherkin Scenarios ODC-4504 1910049 - UPI: ansible-galaxy is not supported 1910127 - [UPI on oVirt]: Improve UPI Documentation 1910140 - fix the api dashboard with changes in upstream kube 1.20 1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable 1910165 - DHCP to static lease script doesn't handle multiple addresses 1910305 - [Descheduler] - The minKubeVersion should be 1.20.0 1910409 - Notification drawer is not localized for i18n 1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials 1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation 1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page 1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work 1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready 1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability 1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded 1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected" 1910753 - Support Directory Path to Devfile 1910805 - Missing translation for Pipeline status and breadcrumb text 1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer 1910840 - Show Nonexistent command info in the oc rollback -h help page 1910859 - breadcrumbs doesn't use last namespace 1910866 - Unify templates string 1910870 - Unify template
dropdown action 1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6 1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads" 1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard 1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration" 1911213 - Wrong and misleading warning for VMs that were created manually (not from template) 1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created 1911269 - waiting for the build message present when build exists 1911280 - Builder images are not detected for Dotnet, Httpd, NGINX 1911307 - Pod Scale-up requires extra privileges in OpenShift web-console 1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template 1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error 1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template 1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation 1911418 - [v2v] The target storage class name is not displayed if default storage class is used 1911434 - git ops empty state page displays icon with watermark 1911443 - SSH Certification field should be validated 1911465 - IOPS display wrong unit 1911474 - Devfile Application Group Does Not Delete Cleanly (errors) 1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController 1911574 - Expose volume mode on Upload Data form 1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined 1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel 1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command
output said 'Failed to run bundle' 1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state 1911782 - Descheduler should not evict pod used local storage by the PVC 1911796 - uploading flow being displayed before submitting the form 1912066 - The ansible type operator's manager container is not stable when managing the CR 1912077 - helm operator's default rbac forbidden 1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory' 1912237 - Rebase CSI sidecars for 4.7 1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page 1912409 - Fix flow schema deployment 1912434 - Update guided tour modal title 1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken 1912523 - Standalone pod status not updating in topology graph 1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion 1912558 - TaskRun list and detail screen doesn't show Pending status 1912563 - p&f: carry 97206: clean up executing request on panic 1912565 - OLM macOS local build broken by moby/term dependency 1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion 1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff 1912590 - publicImageRepository not being populated 1912640 - Go operator's controller pods is forbidden 1912701 - Handle dual-stack configuration for NIC IP 1912703 - multiple queries can't be plotted in the same graph under some conditions 1912730 - Operator backed: In-context should support visual connector if SBO is not installed 1912828 - Align High Performance VMs with High Performance in RHV-UI 1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates 1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator 1912907 - Helm chart repository index can contain unresolvable relative URL's 1912916 - Set external traffic policy to cluster for IBM platform 1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller 1912938 - Update confirmation modal for quick starts 1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment 1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment 1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver 1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912977 - rebase upstream static-provisioner 1913006 - Remove etcd v2 specific alerts with etcd_http* metrics 1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip 1913037 - update static-provisioner base image 1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state 1913085 - Regression OLM uses scoped client for CRD installation 1913096 - backport: cadvisor machine metrics are missing in k8s 1.19 1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually 1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root 1913196 - Guided Tour doesn't handle resizing of browser 1913209 - Support modal should be shown for community supported templates 1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort 1913249 - update info alert this template is 
not editable 1913285 - VM list empty state should link to virtualization quick starts 1913289 - Rebase AWS EBS CSI driver for 4.7 1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled 1913297 - Remove restriction of taints for arbiter node 1913306 - unnecessary scroll bar is present on quick starts panel 1913325 - 1.20 rebase for openshift-apiserver 1913331 - Import from git: Fails to detect Java builder 1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used 1913343 - (release-4.7) Added changelog file for insights-operator 1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator 1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en." 1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads 1913420 - Time duration setting of resources is not being displayed 1913536 - 4.6.9 -> 4.7 upgrade hangs.
RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\\n\" 1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase 1913560 - Normal user cannot load template on the new wizard 1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user 1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table 1913568 - Normal user cannot create template 1913582 - [Migration] SDN to OVN migration is stuck on MCO for rhel worker 1913585 - Topology descriptive text fixes 1913608 - Table data contains data value None after change time range in graph and change back 1913651 - Improved Red Hat image and crashlooping OpenShift pod collection 1913660 - Change location and text of Pipeline edit flow alert 1913685 - OS field not disabled when creating a VM from a template 1913716 - Include additional use of existing libraries 1913725 - Refactor Insights Operator Plugin states 1913736 - Regression: fails to deploy computes when using root volumes 1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes 1913751 - add third-party network plugin test suite to openshift-tests 1913783 - QE-To fix the merging pr issue, commenting the afterEach() block 1913807 - Template support badge should not be shown for community supported templates 1913821 - Need definitive steps about uninstalling descheduler operator 1913851 - Cluster Tasks are not sorted in pipeline builder 1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists 1913951 - Update the Devfile Sample Repo to an Official Repo Host 1913960 - Cluster Autoscaler should use 1.20 dependencies 1913969 - Field dependency descriptor can sometimes cause an exception 1914060 - Disk created from 'Import via Registry' cannot be used as boot disk 1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy 1914090 - Grafana - The resulting
dataset is too large to graph (OCS RBD volumes being counted as disks) 1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances 1914125 - Still using /dev/vde as default device path when create localvolume 1914183 - Empty NAD page is missing link to quickstarts 1914196 - target port in from dockerfile flow does nothing 1914204 - Creating VM from dev perspective may fail with template not found error 1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets 1914212 - [e2e][automation] Add test to validate bootable disk source 1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes 1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows 1914287 - Bring back selfLink 1914301 - User VM Template source should show the same provider as template itself 1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs 1914309 - /terminal page when WTO not installed shows nonsensical error 1914334 - order of getting started samples is arbitrary 1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x 1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI 1914405 - Quick search modal should be opened when coming back from a selection 1914407 - It's not clear that node-ca is running as non-root 1914427 - Count of pods on the dashboard is incorrect 1914439 - Typo in SRIOV port create command example 1914451 - cluster-storage-operator pod running as root 1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true 1914642 - Customize Wizard Storage tab does not pass validation 1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling 1914793 - device names should not be translated 1914894 -
Warn about using non-groupified api version 1914926 - webdriver-manager pulls incorrect version of ChromeDriver due to a bug 1914932 - Put correct resource name in relatedObjects 1914938 - PVC disk is not shown on customization wizard general tab 1914941 - VM Template rootdisk is not deleted after fetching default disk bus 1914975 - Collect logs from openshift-sdn namespace 1915003 - No estimate of average node readiness during lifetime of a cluster 1915027 - fix MCS blocking iptables rules 1915041 - s3:ListMultipartUploadParts is relied on implicitly 1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons 1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours 1915085 - Pods created and rapidly terminated get stuck 1915114 - [aws-c2s] worker machines are not created during install 1915133 - Missing default pinned nav items in dev perspective 1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource 1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot 1915188 - Remove HostSubnet anonymization 1915200 - [OCP 4.7+ OCS 4.6] Arbiter related Note should not show up during UI deployment 1915217 - OKD payloads expect to be signed with production keys 1915220 - Remove dropdown workaround for user settings 1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure 1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod 1915277 - [e2e][automation] fix cdi upload form test 1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout 1915304 - Updating scheduling component builder & base images to be consistent with ART 1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node 1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from
different connection 1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod 1915357 - Dev Catalog doesn't load anything if virtualization operator is installed 1915379 - New template wizard should require provider and make support input a dropdown type 1915408 - Failure in operator-registry kind e2e test 1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation 1915460 - Cluster name size might affect installations 1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance 1915540 - Silent 4.7 RHCOS install failure on ppc64le 1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI) 1915582 - p&f: carry upstream pr 97860 1915594 - [e2e][automation] Improve test for disk validation 1915617 - Bump bootimage for various fixes 1915624 - "Please fill in the following field: Template provider" blocks customize wizard 1915627 - Translate Guided Tour text. 1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error 1915647 - Intermittent White screen when the connector dragged to revision 1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased 1915654 - [e2e][automation] Add a verification for Affinity modal should hint "Matching node found" 1915661 - Can't run the 'oc adm prune' command in a pod 1915672 - Kuryr doesn't work with selfLink disabled.
1915674 - Golden image PVC creation - storage size should be taken from the template 1915685 - Message for not supported template is not clear enough 1915760 - Need to increase timeout to wait rhel worker get ready 1915793 - quick starts panel syncs incorrectly across browser windows 1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster 1915818 - vsphere-problem-detector: use "_totals" in metrics 1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol 1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version 1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0 1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics 1915885 - Kuryr doesn't support workers running on multiple subnets 1915898 - TaskRun log output shows "undefined" in streaming 1915907 - test/cmd/builds.sh uses docker.io 1915912 - sig-storage-csi-snapshotter image not available 1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard 1915939 - Resizing the browser window removes Web Terminal Icon 1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] 1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7 1915962 - ROKS: manifest with machine health check fails to apply in 4.7 1915972 - Global configuration breadcrumbs do not work as expected 1915981 - Install ethtool and conntrack in container for debugging 1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception 1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups 1916021 - OLM enters infinite loop if Pending CSV replaces itself 1916056 - Need Visual Web 
Terminal metric enabled for OCP monitoring telemetry 1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations 1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk 1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration 1916145 - Explicitly set minimum versions of python libraries 1916164 - Update csi-driver-nfs builder & base images to be consistent with ART 1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7 1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third 1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2 1916379 - error metrics from vsphere-problem-detector should be gauge 1916382 - Can't create ext4 filesystems with Ignition 1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates 1916401 - Deleting an ingress controller with a bad DNS Record hangs 1916417 - [Kuryr] Must-gather does not have all Custom Resources information 1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image 1916454 - teach CCO about upgradeability from 4.6 to 4.7 1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documentation 1916502 - Boot disk mirroring fails with mdadm error 1916524 - Two rootdisk shows on storage step 1916580 - Default yaml is broken for VM and VM template 1916621 - oc adm node-logs examples are wrong 1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret.
1916692 - Possibly fails to destroy LB and thus cluster 1916711 - Update Kube dependencies in MCO to 1.20.0 1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6 1916764 - editing a workload with no application applied, will auto fill the app 1916834 - Pipeline Metrics - Text Updates 1916843 - collect logs from openshift-sdn-controller pod 1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed 1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually 1916888 - OCS wizard Donor chart does not get updated when Device Type is edited 1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together" 1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace 1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document 1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error 1917117 - Common templates - disks screen: invalid disk name 1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created 1917146 - [oVirt] Consume 23-10 ovirt sdk - csi operator 1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk 1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened 1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console 1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory 1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7 1917327 - annotations.message maybe wrong for NTOPodsNotReady alert 1917367 - Refactor periodic.go 1917371 - Add docs on how to use the built-in profiler 1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console 1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui 1917484 - [BM][IPI] Failed to scale down machineset 1917522 - Deprecate --filter-by-os in oc adm catalog mirror 1917537 - controllers continuously busy reconciling operator 1917551 - use min_over_time for vsphere prometheus alerts 1917585 - OLM Operator install page missing i18n 1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types 1917605 - Deleting an exgw causes pods to no longer route to other exgws 1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API 1917656 - Add to Project/application for eventSources from topology shows 404 1917658 - Show TP badge for sources powered by camel connectors in create flow 1917660 - Editing parallelism of job get error info 1917678 - Could not provision pv when no symlink and target found on rhel worker 1917679 - Hide double CTA in admin pipelineruns tab 1917683 - NodeTextFileCollectorScrapeError alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exists to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather a list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP 1918153 - When & character is set as an environment variable in a build config it is getting converted as \u0026 1918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connector's are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration (SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - 
OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese translate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information for oc new-project --skip-config-write is wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is Disk Mode 1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to update oc login help page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective 
and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't comeup after deploying with Vault details from UI 1921572 - For external source (i.e GitHub Source) form view as well shows yaml 1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation] 
Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 
1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources shows `undefined` in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprectaed templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image shoult not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 
https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T
lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H
EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8
4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4
mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL
ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy
Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk
4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM
uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG
krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv
RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6
McvuEaxco7U=
=sw8i
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902251 - The compliancesuite object returns error with ocp4-cis tailored profile 1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object 1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object 1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator 1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" 1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if 
the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.

Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 2007379 - Events are not generated for master offset for ordinary clock 2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace 2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address 2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error 2007522 - No new local-storage-operator-metadata-container is build for 4.10 2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10 2007580 - Azure cilium installs are failing e2e tests 2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10 2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes 2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures 2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow 2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates 2007802 - AWS machine actuator get stuck if machine is completely missing 2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator 2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process 2008151 - Topology breaks on clicking in empty state 2008185 - Console operator go.mod should use go 1.16.version 2008201 - openstack-az job is failing on haproxy idle test 2008207 - vsphere CSI driver doesn't set resource limits 2008223 - gather_audit_logs: fix oc command line to get the current audit profile 2008235 - The Save button in the Edit DC form remains disabled 2008256 - Update Internationalization README with scope info 2008321 - Add correct documentation link for MON_DISK_LOW 2008462 - Disable PodSecurity feature gate for 4.10 2008490 - 
Backing store details page does not contain all the kebab actions. 2010181 - Environment variables not getting reset on reload on deployment edit form 2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2010341 - OpenShift Alerting Rules Style-Guide Compliance 2010342 - Local console builds can have out of memory errors 2010345 - OpenShift Alerting Rules Style-Guide Compliance 2010348 - Reverts PIE build mode for K8S components 2010352 - OpenShift Alerting Rules Style-Guide Compliance 2010354 - OpenShift Alerting Rules Style-Guide Compliance 2010359 - OpenShift Alerting Rules Style-Guide Compliance 2010368 - OpenShift Alerting Rules Style-Guide Compliance 2010376 - OpenShift Alerting Rules Style-Guide Compliance 2010662 - Cluster is unhealthy after image-registry-operator tests 2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent) 2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API 2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address 2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing 2010864 - Failure building EFS operator 2010910 - ptp worker events unable to identify interface for multiple interfaces 2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24 2010921 - Azure Stack Hub does not handle additionalTrustBundle 2010931 - SRO CSV uses non default category "Drivers and plugins" 2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
2011038 - optional operator conditions are confusing 2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass 2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's 2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image 2011368 - Tooltip in pipeline visualization shows misleading data 2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels 2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards 2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster 2011513 - Kubelet rejects pods that use resources that should be freed by completed pods 2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine" 2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented 2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore 2011733 - Repository README points to broken documentarion link 2011753 - Ironic resumes clean before raid configuration job is actually completed 2011809 - The nodes page in the openshift console doesn't work. 
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing 
in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator 
package cic-operator FOREIGN KEY constraint failed. 2022447 - ServiceAccount in manifests conflicts with OLM 2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 2025821 - Make "Network Attachment Definitions" available to regular user 2025823 - The console nav bar ignores plugin separator in existing sections 2025830 - CentOS capitalizaion is wrong 2025837 - Warn users that the RHEL URL expire 2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-* 2025903 - [UI] RoleBindings tab doesn't show correct rolebindings 2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2026178 - OpenShift Alerting Rules Style-Guide Compliance 2026209 - Updation of task is getting failed (tekton hub integration) 2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io" 2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates 2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct 2026352 - Kube-Scheduler revision-pruner fail during install of new cluster 2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment 2026383 - Error when rendering custom Grafana dashboard through ConfigMap 2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation 2026396 - Cachito Issues: sriov-network-operator Image build failure 2026488 - openshift-controller-manager - delete event is repeating pathologically 2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists 2039382 - gather_metallb_logs does not have execution permission 2039406 - logout from rest session after vsphere operator sync is finished 2039408 - Add GCP region northamerica-northeast2 to allowed regions 2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration 2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment 2039491 - oc - git:// protocol used in unit tests 2039516 - Bump OVN to ovn21.12-21.12.0-25 2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate 2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled 2039541 - Resolv-prepender script duplicating entries 2039586 - [e2e] update centos8 to centos stream8 2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty 2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3' 2039670 - Create PDBs for control plane components 2039678 - Page goes blank when create image pull secret 2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported 2039743 - React missing key warning when open operator hub detail page (and maybe others as well) 2039756 - React missing key warning when open KnativeServing details 2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab 2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard 2039781 - [GSS] OBC is not visible by admin of a Project on Console 2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector 2039868 - Insights Advisor widget is not in the disabled state when the Insights 
Operator is disabled 2039880 - Log level too low for control plane metrics 2039919 - Add E2E test for router compression feature 2039981 - ZTP for standard clusters installs stalld on master nodes 2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.

Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

  1. Description:

WebKitGTK+ is a port of the WebKit portable web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
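In practice, applying this advisory on an affected RHEL 7 host amounts to running `yum update webkitgtk4` as root. The following is a minimal sketch of a local check that the installed version-release is at or above the fixed build from the Package List below (2.28.2-2.el7); the helper name `webkitgtk4_patched` is illustrative, and GNU `sort -V` is used here as an approximation of RPM's `rpmvercmp` comparison, which is adequate for the simple version-release strings in this advisory but is not a full replacement for it.

```shell
# Approximate check that an installed webkitgtk4 version-release is at or
# above the fixed build from this advisory (2.28.2-2.el7).
# NOTE: sort -V approximates rpmvercmp; it is not a full implementation.
webkitgtk4_patched() {
  installed="$1"
  fixed="2.28.2-2.el7"
  # If the fixed build sorts lowest (or equal), the installed build is >= it.
  if [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n 1)" = "$fixed" ]; then
    echo patched
  else
    echo vulnerable
  fi
}

# Query the running system (falls back to "0" when the package is absent):
webkitgtk4_patched "$(rpm -q --qf '%{VERSION}-%{RELEASE}' webkitgtk4 2>/dev/null || echo 0)"
```

The same comparison can be rerun after `yum update webkitgtk4` to confirm the errata packages took effect.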

  1. Bugs fixed (https://bugzilla.redhat.com/):

1667409 - CVE-2019-6251 webkitgtk: processing maliciously crafted web content leads to URI spoofing 1709289 - CVE-2019-11070 webkitgtk: HTTP proxy setting deanonymization information disclosure 1719199 - CVE-2019-8506 webkitgtk: malicious web content leads to arbitrary code execution 1719209 - CVE-2019-8524 webkitgtk: malicious web content leads to arbitrary code execution 1719210 - CVE-2019-8535 webkitgtk: malicious crafted web content leads to arbitrary code execution 1719213 - CVE-2019-8536 webkitgtk: malicious crafted web content leads to arbitrary code execution 1719224 - CVE-2019-8544 webkitgtk: malicious crafted web content leads to arbitrary code execution 1719231 - CVE-2019-8558 webkitgtk: malicious crafted web content leads to arbitrary code execution 1719235 - CVE-2019-8559 webkitgtk: malicious web content leads to arbitrary code execution 1719237 - CVE-2019-8563 webkitgtk: malicious web content leads to arbitrary code execution 1719238 - CVE-2019-8551 webkitgtk: malicious web content leads to cross site scripting 1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp 1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution 1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution 1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution 1817144 - Rebase WebKitGTK to 2.28 1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content 1876462 - CVE-2020-3885 webkitgtk: Incorrect processing of file URLs 1876463 - CVE-2020-3894 webkitgtk: Race condition allows reading of restricted memory 1876465 - CVE-2020-3895 webkitgtk: Memory corruption triggered by a malicious web content 1876468 - CVE-2020-3897 webkitgtk: Type confusion leading to arbitrary code execution 1876470 - CVE-2020-3899 webkitgtk: Memory consumption issue leading to arbitrary code execution 1876472 -
CVE-2020-3900 webkitgtk: Memory corruption triggered by a malicious web content 1876473 - CVE-2020-3901 webkitgtk: Type confusion leading to arbitrary code execution 1876476 - CVE-2020-3902 webkitgtk: Input validation issue leading to cross-site script attack 1876516 - CVE-2020-3862 webkitgtk: Denial of service via incorrect memory handling 1876518 - CVE-2020-3864 webkitgtk: Non-unique security origin for DOM object contexts 1876521 - CVE-2020-3865 webkitgtk: Incorrect security check for a top-level DOM object context 1876522 - CVE-2020-3867 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876523 - CVE-2020-3868 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876536 - CVE-2019-8710 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876537 - CVE-2019-8743 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876540 - CVE-2019-8764 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876542 - CVE-2019-8765 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876543 - CVE-2019-8766 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876545 - CVE-2019-8782 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876548 - CVE-2019-8783 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876549 - CVE-2019-8808 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876550 - CVE-2019-8811 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876552 - CVE-2019-8812 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876553 - CVE-2019-8813 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876554 - CVE-2019-8814 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876555 
- CVE-2019-8815 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876556 - CVE-2019-8816 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876590 - CVE-2019-8819 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876591 - CVE-2019-8820 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876592 - CVE-2019-8821 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876593 - CVE-2019-8822 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876594 - CVE-2019-8823 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876607 - CVE-2019-8625 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876608 - CVE-2019-8674 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876609 - CVE-2019-8707 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876610 - CVE-2019-8719 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876611 - CVE-2019-8720 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876612 - CVE-2019-8726 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876613 - CVE-2019-8733 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876614 - CVE-2019-8735 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876615 - CVE-2019-8763 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876616 - CVE-2019-8768 webkitgtk: Browsing history could not be deleted 1876617 - CVE-2019-8769 webkitgtk: Websites could reveal browsing history 1876619 - CVE-2019-8771 webkitgtk: Violation of iframe sandboxing policy 1876626 - CVE-2019-8644 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 
1876628 - CVE-2019-8649 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876629 - CVE-2019-8658 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876630 - CVE-2019-8666 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876631 - CVE-2019-8669 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876632 - CVE-2019-8671 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876634 - CVE-2019-8672 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876643 - CVE-2019-8673 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876644 - CVE-2019-8676 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876645 - CVE-2019-8677 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876646 - CVE-2019-8678 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876647 - CVE-2019-8679 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876648 - CVE-2019-8680 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876650 - CVE-2019-8681 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876651 - CVE-2019-8683 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876652 - CVE-2019-8684 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876653 - CVE-2019-8686 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876655 - CVE-2019-8687 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876656 - CVE-2019-8688 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876657 - CVE-2019-8689 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 
1876664 - CVE-2019-8690 webkitgtk: Incorrect state management leading to universal cross-site scripting 1876880 - CVE-2019-6237 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876881 - CVE-2019-8571 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876882 - CVE-2019-8583 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876883 - CVE-2019-8584 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876884 - CVE-2019-8586 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876887 - CVE-2019-8587 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876891 - CVE-2019-8594 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876892 - CVE-2019-8595 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876893 - CVE-2019-8596 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876894 - CVE-2019-8597 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876895 - CVE-2019-8601 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876897 - CVE-2019-8607 webkitgtk: Out-of-bounds read leading to memory disclosure 1876898 - CVE-2019-8608 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876899 - CVE-2019-8609 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1876900 - CVE-2019-8610 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1877045 - CVE-2019-8615 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1877046 - CVE-2019-8611 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1877047 - CVE-2019-8619 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1877048 - CVE-2019-8622 
webkitgtk: Multiple memory corruption issues leading to arbitrary code execution 1877049 - CVE-2019-8623 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution

  1. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

ppc64: webkitgtk4-2.28.2-2.el7.ppc.rpm webkitgtk4-2.28.2-2.el7.ppc64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le: webkitgtk4-2.28.2-2.el7.ppc64le.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x: webkitgtk4-2.28.2-2.el7.s390.rpm webkitgtk4-2.28.2-2.el7.s390x.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64: webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x: webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-devel-2.28.2-2.el7.s390.rpm webkitgtk4-devel-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/


Gentoo Linux Security Advisory GLSA 202003-22

                                       https://security.gentoo.org/

 Severity: Normal
    Title: WebkitGTK+: Multiple vulnerabilities
     Date: March 15, 2020
     Bugs: #699156, #706374, #709612
       ID: 202003-22


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which may lead to arbitrary code execution.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk        < 2.26.4                >= 2.26.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the referenced CVE identifiers for details.

Impact

A remote attacker could execute arbitrary code, cause a Denial of Service condition, bypass intended memory-read restrictions, conduct a timing side-channel attack to bypass the Same Origin Policy or obtain sensitive information.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"
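The remediation above hinges on a simple version-range check: any installed net-libs/webkit-gtk strictly below 2.26.4 is in the vulnerable range. A minimal sketch of that comparison logic in Python follows; the helper names and the sample version strings are illustrative assumptions, and real Gentoo tooling (portage) performs a richer comparison that also handles suffixes such as `_rc` and `-r` revisions, which this sketch ignores.

```python
# Sketch: classify a dotted webkit-gtk version against the GLSA 202003-22
# fixed release (2.26.4). Assumes plain numeric dotted versions only.

def parse_version(v: str) -> tuple:
    """Split a dotted version string into a tuple of ints so that
    tuple comparison gives correct numeric ordering (2.9 < 2.26)."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed: str = "2.26.4") -> bool:
    """True if the installed version is strictly below the first fixed release."""
    return parse_version(installed) < parse_version(fixed)

if __name__ == "__main__":
    # Illustrative versions, not a claim about any particular system.
    for v in ("2.24.1", "2.26.3", "2.26.4", "2.28.2"):
        status = "vulnerable" if is_vulnerable(v) else "unaffected"
        print(f"net-libs/webkit-gtk-{v}: {status}")
```

Note that tuple comparison is what makes this correct where naive string comparison fails: `"2.9" > "2.26"` lexicographically, but `(2, 9) < (2, 26)` numerically.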

References

[ 1 ] CVE-2019-8625   https://nvd.nist.gov/vuln/detail/CVE-2019-8625
[ 2 ] CVE-2019-8674   https://nvd.nist.gov/vuln/detail/CVE-2019-8674
[ 3 ] CVE-2019-8707   https://nvd.nist.gov/vuln/detail/CVE-2019-8707
[ 4 ] CVE-2019-8710   https://nvd.nist.gov/vuln/detail/CVE-2019-8710
[ 5 ] CVE-2019-8719   https://nvd.nist.gov/vuln/detail/CVE-2019-8719
[ 6 ] CVE-2019-8720   https://nvd.nist.gov/vuln/detail/CVE-2019-8720
[ 7 ] CVE-2019-8726   https://nvd.nist.gov/vuln/detail/CVE-2019-8726
[ 8 ] CVE-2019-8733   https://nvd.nist.gov/vuln/detail/CVE-2019-8733
[ 9 ] CVE-2019-8735   https://nvd.nist.gov/vuln/detail/CVE-2019-8735
[ 10 ] CVE-2019-8743  https://nvd.nist.gov/vuln/detail/CVE-2019-8743
[ 11 ] CVE-2019-8763  https://nvd.nist.gov/vuln/detail/CVE-2019-8763
[ 12 ] CVE-2019-8764  https://nvd.nist.gov/vuln/detail/CVE-2019-8764
[ 13 ] CVE-2019-8765  https://nvd.nist.gov/vuln/detail/CVE-2019-8765
[ 14 ] CVE-2019-8766  https://nvd.nist.gov/vuln/detail/CVE-2019-8766
[ 15 ] CVE-2019-8768  https://nvd.nist.gov/vuln/detail/CVE-2019-8768
[ 16 ] CVE-2019-8769  https://nvd.nist.gov/vuln/detail/CVE-2019-8769
[ 17 ] CVE-2019-8771  https://nvd.nist.gov/vuln/detail/CVE-2019-8771
[ 18 ] CVE-2019-8782  https://nvd.nist.gov/vuln/detail/CVE-2019-8782
[ 19 ] CVE-2019-8783  https://nvd.nist.gov/vuln/detail/CVE-2019-8783
[ 20 ] CVE-2019-8808  https://nvd.nist.gov/vuln/detail/CVE-2019-8808
[ 21 ] CVE-2019-8811  https://nvd.nist.gov/vuln/detail/CVE-2019-8811
[ 22 ] CVE-2019-8812  https://nvd.nist.gov/vuln/detail/CVE-2019-8812
[ 23 ] CVE-2019-8813  https://nvd.nist.gov/vuln/detail/CVE-2019-8813
[ 24 ] CVE-2019-8814  https://nvd.nist.gov/vuln/detail/CVE-2019-8814
[ 25 ] CVE-2019-8815  https://nvd.nist.gov/vuln/detail/CVE-2019-8815
[ 26 ] CVE-2019-8816  https://nvd.nist.gov/vuln/detail/CVE-2019-8816
[ 27 ] CVE-2019-8819  https://nvd.nist.gov/vuln/detail/CVE-2019-8819
[ 28 ] CVE-2019-8820  https://nvd.nist.gov/vuln/detail/CVE-2019-8820
[ 29 ] CVE-2019-8821  https://nvd.nist.gov/vuln/detail/CVE-2019-8821
[ 30 ] CVE-2019-8822  https://nvd.nist.gov/vuln/detail/CVE-2019-8822
[ 31 ] CVE-2019-8823  https://nvd.nist.gov/vuln/detail/CVE-2019-8823
[ 32 ] CVE-2019-8835  https://nvd.nist.gov/vuln/detail/CVE-2019-8835
[ 33 ] CVE-2019-8844  https://nvd.nist.gov/vuln/detail/CVE-2019-8844
[ 34 ] CVE-2019-8846  https://nvd.nist.gov/vuln/detail/CVE-2019-8846
[ 35 ] CVE-2020-3862  https://nvd.nist.gov/vuln/detail/CVE-2020-3862
[ 36 ] CVE-2020-3864  https://nvd.nist.gov/vuln/detail/CVE-2020-3864
[ 37 ] CVE-2020-3865  https://nvd.nist.gov/vuln/detail/CVE-2020-3865
[ 38 ] CVE-2020-3867  https://nvd.nist.gov/vuln/detail/CVE-2020-3867
[ 39 ] CVE-2020-3868  https://nvd.nist.gov/vuln/detail/CVE-2020-3868

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202003-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202010-1327",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.5"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.4"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.9.2"
      },
      {
        "model": "enterprise linux desktop",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "7.0"
      },
      {
        "model": "enterprise linux workstation",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "7.0"
      },
      {
        "model": "enterprise linux server",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "redhat",
        "version": "7.0"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "7.17"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.3.1"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2020-3864",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "LOCAL",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.2,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-3864",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 1.1,
            "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "LOCAL",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.2,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 3.9,
            "id": "VHN-181989",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 0.1,
            "vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "id": "CVE-2020-3864",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-3864",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202002-847",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-181989",
            "trust": 0.1,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-3864",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A logic issue was addressed with improved validation. This issue is fixed in iCloud for Windows 7.17, iTunes 12.10.4 for Windows, iCloud for Windows 10.9.2, tvOS 13.3.1, Safari 13.0.5, iOS 13.3.1 and iPadOS 13.3.1. A DOM object context may not have had a unique security origin. WebKitGTK is a set of open source web browser engines jointly developed by companies such as KDE, Apple (Apple), and Google (Google). Security vulnerabilities exist in WebKitGTK versions prior to 2.26.4 and WPE WebKit versions prior to 2.26.4. An attacker could exploit this vulnerability to interact with objects in other domains. Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. \nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nThese updated images include numerous security fixes, bug fixes, and\nenhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume\n1813506 - Dockerfile not  compatible with docker and buildah\n1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup\n1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement\n1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance\n1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)\n1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. \n1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default\n1842254 - [NooBaa] Compression stats do not add up when compression id disabled\n1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster\n1849771 - [RFE] Account created by OBC should have same permissions as bucket owner\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot\n1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume\n1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount\n1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params)\n1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips \"b\" and \"c\" (spawned from Bug 1840084#c14)\n1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage\n1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards\n1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found\n1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining\n1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script\n1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH  while running couple of OCS test cases. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:5633-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:5633\nIssue date:        2021-02-24\nCVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 \n                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 \n                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 \n                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 \n                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 \n                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 \n                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 \n                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 \n                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 \n                   CVE-2019-6977 CVE-2019-6978 
CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 \n                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 \n                   CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 \n                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 \n                   CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 \n                   CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 \n                   CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 \n                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 \n                   CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 \n                   CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 \n                   CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 \n                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 \n                   CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 \n                   CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 \n                   CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 \n                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 \n                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 \n                   CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 \n                   CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 \n                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 \n                   CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 \n                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 \n                   CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 \n                   CVE-2020-3885 
CVE-2020-3894 CVE-2020-3895 \n                   CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 \n                   CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 \n                   CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 \n                   CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 \n                   CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 \n                   CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 \n                   CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 \n                   CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 \n                   CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 \n                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 \n                   CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 \n                   CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 \n                   CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 \n                   CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 \n                   CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 \n                   CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 \n                   CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 \n                   CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 \n                   CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 \n                   CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 \n                   CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 \n                   CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 \n                   CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 \n                   CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 \n                   CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 \n                   CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 \n                   CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 \n                   CVE-2021-2007 CVE-2021-3121 
\n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.7.0 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2020:5634\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-x86_64\n\nThe image digest is\nsha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-s390x\n\nThe image digest is\nsha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le\n\nThe image digest is\nsha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the 
appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. \n\nSecurity Fix(es):\n\n* crewjam/saml: authentication bypass in saml authentication\n(CVE-2020-27846)\n\n* golang: crypto/ssh: crafted authentication request can lead to nil\npointer dereference (CVE-2020-29652)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* kubernetes: Secret leaks in kube-controller-manager when using vSphere\nProvider (CVE-2020-8563)\n\n* containernetworking/plugins: IPv6 router advertisements allow for MitM\nattacks on IPv4 clusters (CVE-2020-10749)\n\n* heketi: gluster-block volume password details available in logs\n(CVE-2020-10763)\n\n* golang.org/x/text: possibility to trigger an infinite loop in\nencoding/unicode could lead to crash (CVE-2020-14040)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* golang-github-gorilla-websocket: integer overflow leads to denial of\nservice (CVE-2020-27813)\n\n* golang: math/big: panic during recursive division of very large numbers\n(CVE-2020-28362)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.7, see the following documentation,\nwhich\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1620608 - Restoring deployment config with history leads to weird state\n1752220 - [OVN] Network Policy fails to work when project label gets overwritten\n1756096 - Local storage operator should implement must-gather spec\n1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n1768255 - installer reports 100% complete but failing components\n1770017 - Init containers restart when the exited container is removed from node. \n1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating\n1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset\n1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale\n1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands\n1784298 - \"Displaying with reduced resolution due to large dataset.\" would show under some conditions\n1785399 - Under condition of heavy pod creation, creation fails with \u0027error reserving pod name ...: name is reserved\"\n1797766 - Resource Requirements\" specDescriptor fields - CPU and Memory injects empty string YAML editor\n1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - 
olmFull suite always fails once th suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text.
1853116 - `--to` option does not work with `--credentials-requests` flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there's no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why "No persistent storage alerts" was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: "?" button/icon in Developer Console ->Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\\"/mount-point\\\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - `oc api-resources` does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with "no fc disk found" error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - `oc explain localvolumediscovery` returns empty description
1887751 - `oc explain localvolumediscoveryresult` returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values.
PVC is not getting created and no error is shown\n1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn\u0027t meet requirements of chart)\n1891362 - Wrong metrics count for openshift_build_result_total\n1891368 - fync should be fsync for etcdHighFsyncDurations alert\u0027s annotations.message\n1891374 - fync should be fsync for etcdHighFsyncDurations critical alert\u0027s annotations.message\n1891376 - Extra text in Cluster Utilization charts\n1891419 - Wrong detail head on network policy detail page. \n1891459 - Snapshot tests should report stderr of failed commands\n1891498 - Other machine config pools do not show during update\n1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage\n1891551 - Clusterautoscaler doesn\u0027t scale up as expected\n1891552 - Handle missing labels as empty. \n1891555 - The windows oc.exe binary does not have version metadata\n1891559 - kuryr-cni cannot start new thread\n1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11\n1891625 - [Release 4.7] Mutable LoadBalancer Scope\n1891702 - installer get pending when additionalTrustBundle is added into  install-config.yaml\n1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails\n1891740 - OperatorStatusChanged is noisy\n1891758 - the authentication operator may spam DeploymentUpdated event endlessly\n1891759 - Dockerfile builds cannot change /etc/pki/ca-trust\n1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1\n1891825 - Error message not very informative in case of mode mismatch\n1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 
\n1891951 - UI should show warning while creating pools with compression on\n1891952 - [Release 4.7] Apps Domain Enhancement\n1891993 - 4.5 to 4.6 upgrade doesn\u0027t remove deployments created by marketplace\n1891995 - OperatorHub displaying old content\n1891999 - Storage efficiency card showing wrong compression ratio\n1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28\u0027 not found (required by ./opm)\n1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. \n1892198 - TypeError in \u0027Performance Profile\u0027 tab displayed for \u0027Performance Addon Operator\u0027\n1892288 - assisted install workflow creates excessive control-plane disruption\n1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config\n1892358 - [e2e][automation] update feature gate for kubevirt-gating job\n1892376 - Deleted netnamespace could not be re-created\n1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky\n1892393 - TestListPackages is flaky\n1892448 - MCDPivotError alert/metric missing\n1892457 - NTO-shipped stalld needs to use FIFO for boosting. \n1892467 - linuxptp-daemon crash\n1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env\n1892653 - User is unable to create KafkaSource with v1beta\n1892724 - VFS added to the list of devices of the nodeptpdevice CRD\n1892799 - Mounting additionalTrustBundle in the operator\n1893117 - Maintenance mode on vSphere blocks installation. \n1893351 - TLS secrets are not able to edit on console. 
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import 
wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca 
injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest\n1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded\n1896244 - Found a panic in storage e2e test\n1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general\n1896302 - [e2e][automation] Fix 4.6 test failures\n1896365 - [Migration]The SDN migration cannot revert under some conditions\n1896384 - [ovirt IPI]: local coredns resolution not working\n1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6\n1896529 - Incorrect instructions in the Serverless operator and application quick starts\n1896645 - documentationBaseURL needs to be updated for 4.7\n1896697 - [Descheduler] policy.yaml param in cluster configmap is empty\n1896704 - Machine API components should honour cluster wide proxy settings\n1896732 - \"Attach to Virtual Machine OS\" button should not be visible on old clusters\n1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection  is incompatible with SR-IOV operator\n1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails\n1896918 - start creating new-style Secrets for AWS\n1896923 - DNS pod /metrics exposed on anonymous http port\n1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... 
must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\"  \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe  Translation of  \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
\n1898407 - [Deployment timing regression] Deployment takes longer with 4.7\n1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service\n1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine\n1898500 - Failure to upgrade operator when a Service is included in a Bundle\n1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic\n1898532 - Display names defined in specDescriptors not respected\n1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted\n1898613 - Whereabouts should exclude IPv6 ranges\n1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase\n1898679 - Operand creation form - Required \"type: object\" properties (Accordion component) are missing red asterisk\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator\n1898839 - Wrong YAML in operator metadata\n1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job\n1898873 - Remove TechPreview Badge from Monitoring\n1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way\n1899111 - [RFE] Update jenkins-maven-agen to maven36\n1899128 - VMI details screen -\u003e show the warning that it is preferable to have a VM only if the VM actually does not exist\n1899175 - bump the RHCOS boot images for 4.7\n1899198 - Use new packages for ipa ramdisks\n1899200 - In Installed Operators page I cannot search for an Operator by it\u0027s name\n1899220 - Support AWS IMDSv2\n1899350 - configure-ovs.sh doesn\u0027t configure bonding options\n1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error \"An error occurred Not Found\"\n1899459 - 
Failed to start monitoring pods once the operator removed from override list of CVO\n1899515 - Passthrough credentials are not immediately re-distributed on update\n1899575 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899582 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899588 - Operator objects are re-created after all other associated resources have been deleted\n1899600 - Increased etcd fsync latency as of OCP 4.6\n1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup\n1899627 - Project dashboard Active status using small icon\n1899725 - Pods table does not wrap well with quick start sidebar open\n1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)\n1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality\n1899835 - catalog-operator repeatedly crashes with \"runtime error: index out of range [0] with length 0\"\n1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap\n1899853 - additionalSecurityGroupIDs not working for master nodes\n1899922 - NP changes sometimes influence new pods. 
\n1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet\n1900008 - Fix internationalized sentence fragments in ImageSearch.tsx\n1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx\n1900020 - Remove \u0026apos; from internationalized keys\n1900022 - Search Page - Top labels field is not applied to selected Pipeline resources\n1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently\n1900126 - Creating a VM results in suggestion to create a default storage class when one already exists\n1900138 - [OCP on RHV] Remove insecure mode from the installer\n1900196 - stalld is not restarted after crash\n1900239 - Skip \"subPath should be able to unmount\" NFS test\n1900322 - metal3 pod\u0027s toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists\n1900377 - [e2e][automation] create new css selector for active users\n1900496 - (release-4.7) Collect spec config for clusteroperator resources\n1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks\n1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue\n1900759 - include qemu-guest-agent by default\n1900790 - Track all resource counts via telemetry\n1900835 - Multus errors when cachefile is not found\n1900935 - `oc adm release mirror` panic panic: runtime error\n1900989 - accessing the route cannot wake up the idled resources\n1901040 - When scaling down the status of the node is stuck on deleting\n1901057 - authentication operator health check failed when installing a cluster behind proxy\n1901107 - pod donut shows incorrect information\n1901111 - Installer dependencies are broken\n1901200 - linuxptp-daemon crash when enable debug log level\n1901301 - CBO should handle platform=BM without provisioning CR\n1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly\n1901363 - High Podready 
Latency due to timed out waiting for annotations\n1901373 - redundant bracket on snapshot restore button\n1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with \"timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true\"\n1901395 - \"Edit virtual machine template\" action link should be removed\n1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting\n1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP\n1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema\n1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod \"before all\" hook for \"creates the resource instance\"\n1901604 - CNO blocks editing Kuryr options\n1901675 - [sig-network] multicast when using one of the plugins \u0027redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy\u0027 should allow multicast traffic in namespaces where it is enabled\n1901909 - The device plugin pods / cni pod are restarted every 5 minutes\n1901982 - [sig-builds][Feature:Builds] build can reference a cluster service  with a build being created from new-build should be able to run a build that references a cluster service\n1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error\n1902059 - Wire a real signer for service accout issuer\n1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902157 - The DaemonSet machine-api-termination-handler couldn\u0027t allocate Pod\n1902253 - MHC status doesnt set RemediationsAllowed = 0\n1902299 - Failed to mirror operator catalog - error: destination registry required\n1902545 - Cinder csi 
driver node pod should add nodeSelector for Linux\n1902546 - Cinder csi driver node pod doesn\u0027t run on master node\n1902547 - Cinder csi driver controller pod doesn\u0027t run on master node\n1902552 - Cinder csi driver does not use the downstream images\n1902595 - Project workloads list view doesn\u0027t show alert icon and hover message\n1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent\n1902601 - Cinder csi driver pods run as BestEffort qosClass\n1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group\n1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails\n1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked\n1902824 - failed to generate semver informed package manifest: unable to determine default channel\n1902894 - hybrid-overlay-node crashing trying to get node object during initialization\n1902969 - Cannot load vmi detail page\n1902981 - It should default to current namespace when create vm from template\n1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file  via s3:// URI\n1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry\n1903034 - OLM continuously printing debug logs\n1903062 - [Cinder csi driver] Deployment mounted volume have no write access\n1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready\n1903107 - Enable vsphere-problem-detector e2e tests\n1903164 - OpenShift YAML editor jumps to top every few seconds\n1903165 - Improve Canary Status Condition handling for e2e tests\n1903172 - Column Management: Fix sticky footer on scroll\n1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled\n1903188 - [Descheduler] cluster log reports failed to 
validate server configuration\" err=\"unsupported log format:\n1903192 - Role name missing on create role binding form\n1903196 - Popover positioning is misaligned for Overview Dashboard status items\n1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. \n1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components\n1903248 - Backport Upstream Static Pod UID patch\n1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]\n1903290 - Kubelet repeatedly log the same log line from exited containers\n1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. \n1903382 - Panic when task-graph is canceled with a TaskNode with no tasks\n1903400 - Migrate a VM which is not running goes to pending state\n1903402 - Nic/Disk on VMI overview should link to VMI\u0027s nic/disk page\n1903414 - NodePort is not working when configuring an egress IP address\n1903424 - mapi_machine_phase_transition_seconds_sum doesn\u0027t work\n1903464 - \"Evaluating rule failed\" for \"record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\" and \"record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"\n1903639 - Hostsubnet gatherer produces wrong output\n1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service\n1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started\n1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image\n1903717 - Handle different Pod selectors for metal3 Deployment\n1903733 - Scale up followed by scale down can delete all running workers\n1903917 - Failed to load \"Developer Catalog\" page\n1903999 - Httplog response code is always zero\n1904026 - The quota controllers should resync on new 
resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE -Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the `oc rollback -h` help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Cretifiaction field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle''
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditons
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not aditable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in `from dockerfile` flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk souce
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot 
Alert has a misspelling\n1914793 - device names should not be translated\n1914894 - Warn about using non-groupified api version\n1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug\n1914932 - Put correct resource name in relatedObjects\n1914938 - PVC disk is not shown on customization wizard general tab\n1914941 - VM Template rootdisk is not deleted after fetching default disk bus\n1914975 - Collect logs from openshift-sdn namespace\n1915003 - No estimate of average node readiness during lifetime of a cluster\n1915027 - fix MCS blocking iptables rules\n1915041 - s3:ListMultipartUploadParts is relied on implicitly\n1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons\n1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours\n1915085 - Pods created and rapidly terminated get stuck\n1915114 - [aws-c2s] worker machines are not create during install\n1915133 - Missing default pinned nav items in dev perspective\n1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource\n1915187 - Remove the \"Tech preview\" tag in web-console for volumesnapshot\n1915188 - Remove HostSubnet anonymization\n1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment\n1915217 - OKD payloads expect to be signed with production keys\n1915220 - Remove dropdown workaround for user settings\n1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure\n1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod\n1915277 - [e2e][automation]fix cdi upload form test\n1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout\n1915304 - Updating scheduling component builder \u0026 base images to be consistent with ART\n1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows 
node\n1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection\n1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod\n1915357 - Dev Catalog doesn\u0027t load anything if virtualization operator is installed\n1915379 - New template wizard should require provider and make support input a dropdown type\n1915408 - Failure in operator-registry kind e2e test\n1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation\n1915460 - Cluster name size might affect installations\n1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance\n1915540 - Silent 4.7 RHCOS install failure on ppc64le\n1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)\n1915582 - p\u0026f: carry upstream pr 97860\n1915594 - [e2e][automation] Improve test for disk validation\n1915617 - Bump bootimage for various fixes\n1915624 - \"Please fill in the following field: Template provider\" blocks customize wizard\n1915627 - Translate Guided Tour text. \n1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error\n1915647 - Intermittent White screen when the connector dragged to revision\n1915649 - \"Template support\" pop up is not a warning; checkbox text should be rephrased\n1915654 - [e2e][automation] Add a verification for Afinity modal should hint \"Matching node found\"\n1915661 - Can\u0027t run the \u0027oc adm prune\u0027 command in a pod\n1915672 - Kuryr doesn\u0027t work with selfLink disabled. 
\n1915674 - Golden image PVC creation - storage size should be taken from the template\n1915685 - Message for not supported template is not clear enough\n1915760 - Need to increase timeout to wait rhel worker get ready\n1915793 - quick starts panel syncs incorrectly across browser windows\n1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster\n1915818 - vsphere-problem-detector: use \"_totals\" in metrics\n1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol\n1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version\n1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0\n1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics\n1915885 - Kuryr doesn\u0027t support workers running on multiple subnets\n1915898 - TaskRun log output shows \"undefined\" in streaming\n1915907 - test/cmd/builds.sh uses docker.io\n1915912 - sig-storage-csi-snapshotter image not available\n1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard\n1915939 - Resizing the browser window removes Web Terminal Icon\n1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]\n1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7\n1915962 - ROKS: manifest with machine health check fails to apply in 4.7\n1915972 - Global configuration breadcrumbs do not work as expected\n1915981 - Install ethtool and conntrack in container for debugging\n1915995 - \"Edit RoleBinding Subject\" action under RoleBinding list page kebab actions causes unhandled exception\n1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups\n1916021 - OLM enters infinite loop if Pending CSV 
replaces itself\n1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry\n1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert\u0027s annotations\n1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk\n1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration\n1916145 - Explicitly set minimum versions of python libraries\n1916164 - Update csi-driver-nfs builder \u0026 base images to be consistent with ART\n1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7\n1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third\n1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2\n1916379 - error metrics from vsphere-problem-detector should be gauge\n1916382 - Can\u0027t create ext4 filesystems with Ignition\n1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving \u0027verified: false\u0027 even for verified updates\n1916401 - Deleting an ingress controller with a bad DNS Record hangs\n1916417 - [Kuryr] Must-gather does not have all Custom Resources information\n1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image\n1916454 - teach CCO about upgradeability from 4.6 to 4.7\n1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation\n1916502 - Boot disk mirroring fails with mdadm error\n1916524 - Two rootdisk shows on storage step\n1916580 - Default yaml is broken for VM and VM template\n1916621 - oc adm node-logs examples are wrong\n1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
\n1916692 - Possibly fails to destroy LB and thus cluster\n1916711 - Update Kube dependencies in MCO to 1.20.0\n1916747 - remove links to quick starts if virtualization operator isn\u0027t updated to 2.6\n1916764 - editing a workload with no application applied, will auto fill the app\n1916834 - Pipeline Metrics - Text Updates\n1916843 - collect logs from openshift-sdn-controller pod\n1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed\n1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually\n1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited\n1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error \"Forbidden: cannot specify lbFloatingIP and apiFloatingIP together\"\n1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace\n1917101 - [UPI on oVirt] - \u0027RHCOS image\u0027 topic isn\u0027t located in the right place in UPI document\n1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to \u0027\"ProxyConfigController\" controller failed to sync \"key\"\u0027 error\n1917117 - Common templates - disks screen: invalid disk name\n1917124 - Custom template - clone existing PVC - the name of the target VM\u0027s data volume is hard-coded; only one VM can be created\n1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator\n1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
\n1917148 - [oVirt] Consume 23-10 ovirt sdk\n1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened\n1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console\n1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory\n1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7\n1917327 - annotations.message maybe wrong for NTOPodsNotReady alert\n1917367 - Refactor periodic.go\n1917371 - Add docs on how to use the built-in profiler\n1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console\n1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui\n1917484 - [BM][IPI] Failed to scale down machineset\n1917522 - Deprecate --filter-by-os in oc adm catalog mirror\n1917537 - controllers continuously busy reconciling operator\n1917551 - use min_over_time for vsphere prometheus alerts\n1917585 - OLM Operator install page missing i18n\n1917587 - Manila CSI operator becomes degraded if user doesn\u0027t have permissions to list share types\n1917605 - Deleting an exgw causes pods to no longer route to other exgws\n1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API\n1917656 - Add to Project/application for eventSources from topology shows 404\n1917658 - Show TP badge for sources powered by camel connectors in create flow\n1917660 - Editing parallelism of job get error info\n1917678 - Could not provision pv when no symlink and target found on rhel worker\n1917679 - Hide double CTA in admin pipelineruns tab\n1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster. 
\n1917759 - Console operator panics after setting plugin that does not exists to the console-operator config\n1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0\n1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0\n1917799 - Gather s list of names and versions of installed OLM operators\n1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error\n1917814 - Show Broker create option in eventing under admin perspective\n1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1917872 - [oVirt] rebase on latest SDK 2021-01-12\n1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image\n1917938 - upgrade version of dnsmasq package\n1917942 - Canary controller causes panic in ingress-operator\n1918019 - Undesired scrollbars in markdown area of QuickStart\n1918068 - Flaky olm integration tests\n1918085 - reversed name of job and namespace in cvo log\n1918112 - Flavor is not editable if a customize VM is created from cli\n1918129 - Update IO sample archive with missing resources \u0026 remove IP anonymization from clusteroperator resources\n1918132 - i18n: Volume Snapshot Contents menu is not translated\n1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2\n1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn\u0027t be installed on OSP\n1918153 - When `\u0026` character is set as an environment variable in a build config it is getting converted as `\\u0026`\n1918185 - Capitalization on PLR details page\n1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections\n1918318 - Kamelet connector\u0027s are not shown in eventing section under Admin perspective\n1918351 - Gather SAP configuration (SCC \u0026 ClusterRoleBinding)\n1918375 - [calico] rbac-proxy container in kube-proxy fails to create 
tokenreviews\n1918395 - [ovirt] increase livenessProbe period\n1918415 - MCD nil pointer on dropins\n1918438 - [ja_JP, zh_CN] Serverless i18n misses\n1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig\n1918471 - CustomNoUpgrade Feature gates are not working correctly\n1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk\n1918622 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1918623 - Updating ose-jenkins-agent-nodejs-12 builder \u0026 base images to be consistent with ART\n1918625 - Updating ose-jenkins-agent-nodejs-10 builder \u0026 base images to be consistent with ART\n1918635 - Updating openshift-jenkins-2 builder \u0026 base images to be consistent with ART #1197\n1918639 - Event listener with triggerRef crashes the console\n1918648 - Subscription page doesn\u0027t show InstallPlan correctly\n1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack\n1918748 - helmchartrepo is not http(s)_proxy-aware\n1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1918803 - Need dedicated details page w/ global config breadcrumbs for \u0027KnativeServing\u0027 plugin\n1918826 - Insights popover icons are not horizontally aligned\n1918879 - need better debug for bad pull secrets\n1918958 - The default NMstate instance from the operator is incorrect\n1919097 - Close bracket \")\" missing at the end of the sentence in the UI\n1919231 - quick search modal cut off on smaller screens\n1919259 - Make \"Add x\" singular in Pipeline Builder\n1919260 - VM Template list actions should not wrap\n1919271 - NM prepender script doesn\u0027t support systemd-resolved\n1919341 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry\n1919379 - dotnet logo out of date\n1919387 - Console 
login fails with no error when it can\u0027t write to localStorage\n1919396 - A11y Violation: svg-img-alt on Pod Status ring\n1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren\u0027t verified\n1919750 - Search InstallPlans got Minified React error\n1919778 - Upgrade is stuck in insights operator Degraded with \"Source clusterconfig could not be retrieved\" until insights operator pod is manually deleted\n1919823 - OCP 4.7 Internationalization Chinese tranlate issue\n1919851 - Visualization does not render when Pipeline \u0026 Task share same name\n1919862 - The tip information for `oc new-project  --skip-config-write` is wrong\n1919876 - VM created via customize wizard cannot inherit template\u0027s PVC attributes\n1919877 - Click on KSVC breaks with white screen\n1919879 - The toolbox container name is changed from \u0027toolbox-root\u0027  to \u0027toolbox-\u0027 in a chroot environment\n1919945 - user entered name value overridden by default value when selecting a git repository\n1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference\n1919970 - NTO does not update when the tuned profile is updated. 
\n1919999 - Bump Cluster Resource Operator Golang Versions\n1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration\n1920200 - user-settings network error results in infinite loop of requests\n1920205 - operator-registry e2e tests not working properly\n1920214 - Bump golang to 1.15 in cluster-resource-override-admission\n1920248 - re-running the pipelinerun with pipelinespec crashes the UI\n1920320 - VM template field is \"Not available\" if it\u0027s created from common template\n1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`\n1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs\n1920390 - Monitoring \u003e Metrics graph shifts to the left when clicking the \"Stacked\" option and when toggling data series lines on / off\n1920426 - Egress Router CNI OWNERS file should have ovn-k team members\n1920427 - Need to update `oc login` help page since we don\u0027t support prompt interactively for the username\n1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time\n1920438 - openshift-tuned panics on turning debugging on/off. 
\n1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn\n1920481 - kuryr-cni pods using unreasonable amount of CPU\n1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof\n1920524 - Topology graph crashes adding Open Data Hub operator\n1920526 - catalog operator causing CPU spikes and bad etcd performance\n1920551 - Boot Order is not editable for Templates in \"openshift\" namespace\n1920555 - bump cluster-resource-override-admission api dependencies\n1920571 - fcp multipath will not recover failed paths automatically\n1920619 - Remove default scheduler profile value\n1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present\n1920674 - MissingKey errors in bindings namespace\n1920684 - Text in language preferences modal is misleading\n1920695 - CI is broken because of bad image registry reference in the Makefile\n1920756 - update generic-admission-server library to get the system:masters authorization optimization\n1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for \"network-check-target\" failed when \"defaultNodeSelector\" is set\n1920771 - i18n: Delete persistent volume claim drop down is not translated\n1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI\n1920912 - Unable to power off BMH from console\n1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by \"2\"\n1920984 - [e2e][automation] some menu items names are out dated\n1921013 - Gather PersistentVolume definition (if any) used in image registry config\n1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)\n1921087 - \u0027start next quick start\u0027 link doesn\u0027t work and is unintuitive\n1921088 - test-cmd is failing on volumes.sh pretty consistently\n1921248 - Clarify the kubelet configuration cr description\n1921253 - Text filter default placeholder text not 
internationalized\n1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window\n1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo\n1921277 - Fix Warning and Info log statements to handle arguments\n1921281 - oc get -o yaml --export returns \"error: unknown flag: --export\"\n1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn\u0027t exist\n1921556 - [OCS with Vault]: OCS pods didn\u0027t comeup after deploying with Vault details from UI\n1921572 - For external source (i.e GitHub Source) form view as well shows yaml\n1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass\n1921610 - Pipeline metrics font size inconsistency\n1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1921655 - [OSP] Incorrect error handling during cloudinfo generation\n1921713 - [e2e][automation]  fix failing VM migration tests\n1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view\n1921774 - delete application modal errors when a resource cannot be found\n1921806 - Explore page APIResourceLinks aren\u0027t i18ned\n1921823 - CheckBoxControls not internationalized\n1921836 - AccessTableRows don\u0027t internationalize \"User\" or \"Group\"\n1921857 - Test flake when hitting router in e2e tests due to one router not being up to date\n1921880 - Dynamic plugins are not initialized on console load in production mode\n1921911 - Installer PR #4589 is causing leak of IAM role policy bindings\n1921921 - \"Global Configuration\" breadcrumb does not use sentence case\n1921949 - Console bug - source code URL broken for gitlab self-hosted repositories\n1921954 - Subscription-related constraints in ResolutionFailed events are misleading\n1922015 - buttons in modal header 
are invisible on Safari\n1922021 - Nodes terminal page \u0027Expand\u0027 \u0027Collapse\u0027 button not translated\n1922050 - [e2e][automation] Improve vm clone tests\n1922066 - Cannot create VM from custom template which has extra disk\n1922098 - Namespace selection dialog is not closed after select a namespace\n1922099 - Updated Readme documentation for QE code review and setup\n1922146 - Egress Router CNI doesn\u0027t have logging support. \n1922267 - Collect specific ADFS error\n1922292 - Bump RHCOS boot images for 4.7\n1922454 - CRI-O doesn\u0027t enable pprof by default\n1922473 - reconcile LSO images for 4.8\n1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace\n1922782 - Source registry missing docker:// in yaml\n1922907 - Interop UI Tests - step implementation for updating feature files\n1922911 - Page crash when click the \"Stacked\" checkbox after clicking the data series toggle buttons\n1922991 - \"verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\" test fails on OKD\n1923003 - WebConsole Insights widget showing \"Issues pending\" when the cluster doesn\u0027t report anything\n1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources\n1923102 - [vsphere-problem-detector-operator] pod\u0027s version is not correct\n1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot\n1923674 - k8s 1.20 vendor dependencies\n1923721 - PipelineRun running status icon is not rotating\n1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios\n1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator\n1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator\n1923874 - Unable to specify values with % in kubeletconfig\n1923888 - Fixes error metadata gathering\n1923892 - Update arch.md after 
refactor. \n1923894 - \"installed\" operator status in operatorhub page does not reflect the real status of operator\n1923895 - Changelog generation. \n1923911 - [e2e][automation] Improve tests for vm details page and list filter\n1923945 - PVC Name and Namespace resets when user changes os/flavor/workload\n1923951 - EventSources shows `undefined` in project\n1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins\n1924046 - Localhost: Refreshing on a Project removes it from nav item urls\n1924078 - Topology quick search View all results footer should be sticky. \n1924081 - NTO should ship the latest Tuned daemon release 2.15\n1924084 - backend tests incorrectly hard-code artifacts dir\n1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents  do not have unexpected content using a simple Docker Strategy Build\n1924135 - Under sufficient load, CRI-O may segfault\n1924143 - Code Editor Decorator url is broken for Bitbucket repos\n1924188 - Language selector dropdown doesn\u0027t always pre-select the language\n1924365 - Add extra disk for VM which use boot source PXE\n1924383 - Degraded network operator during upgrade to 4.7.z\n1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
\n1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can\u0027t set finalizers on\n1924583 - Deprectaed templates are listed in the Templates screen\n1924870 - pick upstream pr#96901: plumb context with request deadline\n1924955 - Images from Private external registry not working in deploy Image\n1924961 - k8sutil.TrimDNS1123Label creates invalid values\n1924985 - Build egress-router-cni for both RHEL 7 and 8\n1925020 - Console demo plugin deployment image shoult not point to dockerhub\n1925024 - Remove extra validations on kafka source form view net section\n1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running\n1925072 - NTO needs to ship the current latest stalld v1.7.0\n1925163 - Missing info about dev catalog in boot source template column\n1925200 - Monitoring Alert icon is missing on the workload in Topology view\n1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1\n1925319 - bash syntax error in configure-ovs.sh script\n1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data\n1925516 - Pipeline Metrics Tooltips are overlapping data\n1925562 - Add new ArgoCD link from GitOps application environments page\n1925596 - Gitops details page image and commit id text overflows past card boundary\n1926556 - \u0027excessive etcd leader changes\u0027 test case failing in serial job because prometheus data is wiped by machine set test\n1926588 - The tarball of operator-sdk is not ready for ocp4.7\n1927456 - 4.7 still points to 4.6 catalog images\n1927500 - API server exits non-zero on 2 SIGTERM signals\n1929278 - Monitoring workloads using too high a priorityclass\n1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n1929920 - Cluster monitoring documentation link is broken - 404 not found\n\n5. 
References:

https://access.redhat.com/security/cve/CVE-2018-10103
https://access.redhat.com/security/cve/CVE-2018-10105
https://access.redhat.com/security/cve/CVE-2018-14461
https://access.redhat.com/security/cve/CVE-2018-14462
https://access.redhat.com/security/cve/CVE-2018-14463
https://access.redhat.com/security/cve/CVE-2018-14464
https://access.redhat.com/security/cve/CVE-2018-14465
https://access.redhat.com/security/cve/CVE-2018-14466
https://access.redhat.com/security/cve/CVE-2018-14467
https://access.redhat.com/security/cve/CVE-2018-14468
https://access.redhat.com/security/cve/CVE-2018-14469
https://access.redhat.com/security/cve/CVE-2018-14470
https://access.redhat.com/security/cve/CVE-2018-14553
https://access.redhat.com/security/cve/CVE-2018-14879
https://access.redhat.com/security/cve/CVE-2018-14880
https://access.redhat.com/security/cve/CVE-2018-14881
https://access.redhat.com/security/cve/CVE-2018-14882
https://access.redhat.com/security/cve/CVE-2018-16227
https://access.redhat.com/security/cve/CVE-2018-16228
https://access.redhat.com/security/cve/CVE-2018-16229
https://access.redhat.com/security/cve/CVE-2018-16230
https://access.redhat.com/security/cve/CVE-2018-16300
https://access.redhat.com/security/cve/CVE-2018-16451
https://access.redhat.com/security/cve/CVE-2018-16452
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-3884
https://access.redhat.com/security/cve/CVE-2019-5018
https://access.redhat.com/security/cve/CVE-2019-6977
https://access.redhat.com/security/cve/CVE-2019-6978
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9455
https://access.redhat.com/security/cve/CVE-2019-9458
https://access.redhat.com/security/cve/CVE-2019-11068
https://access.redhat.com/security/cve/CVE-2019-12614
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13225
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15165
https://access.redhat.com/security/cve/CVE-2019-15166
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-15917
https://access.redhat.com/security/cve/CVE-2019-15925
https://access.redhat.com/security/cve/CVE-2019-16167
https://access.redhat.com/security/cve/CVE-2019-16168
https://access.redhat.com/security/cve/CVE-2019-16231
https://access.redhat.com/security/cve/CVE-2019-16233
https://access.redhat.com/security/cve/CVE-2019-16935
https://access.redhat.com/security/cve/CVE-2019-17450
https://access.redhat.com/security/cve/CVE-2019-17546
https://access.redhat.com/security/cve/CVE-2019-18197\
nhttps://access.redhat.com/security/cve/CVE-2019-18808\nhttps://access.redhat.com/security/cve/CVE-2019-18809\nhttps://access.redhat.com/security/cve/CVE-2019-19046\nhttps://access.redhat.com/security/cve/CVE-2019-19056\nhttps://access.redhat.com/security/cve/CVE-2019-19062\nhttps://access.redhat.com/security/cve/CVE-2019-19063\nhttps://access.redhat.com/security/cve/CVE-2019-19068\nhttps://access.redhat.com/security/cve/CVE-2019-19072\nhttps://access.redhat.com/security/cve/CVE-2019-19221\nhttps://access.redhat.com/security/cve/CVE-2019-19319\nhttps://access.redhat.com/security/cve/CVE-2019-19332\nhttps://access.redhat.com/security/cve/CVE-2019-19447\nhttps://access.redhat.com/security/cve/CVE-2019-19524\nhttps://access.redhat.com/security/cve/CVE-2019-19533\nhttps://access.redhat.com/security/cve/CVE-2019-19537\nhttps://access.redhat.com/security/cve/CVE-2019-19543\nhttps://access.redhat.com/security/cve/CVE-2019-19602\nhttps://access.redhat.com/security/cve/CVE-2019-19767\nhttps://access.redhat.com/security/cve/CVE-2019-19770\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20054\nhttps://access.redhat.com/security/cve/CVE-2019-20218\nhttps://access.redhat.com/security/cve/CVE-2019-20386\nhttps://access.redhat.com/security/cve/CVE-2019-20387\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20636\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/security/cve/CVE-2019-20812\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2019-20916\nhttps://access.redhat.com/security/cve/CVE-2020-0305\nhttps://access.redhat.com/security/cve/CVE-2020-0444\nhttps://access.redhat.com/security/cve/CVE-2020-1716\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.c
om/security/cve/CVE-2020-1751\nhttps://access.redhat.com/security/cve/CVE-2020-1752\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-2574\nhttps://access.redhat.com/security/cve/CVE-2020-2752\nhttps://access.redhat.com/security/cve/CVE-2020-2922\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3898\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-6405\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8492\nhttps://access.redhat.com/security/cve/CVE-2020-8563\nhttps://access.redhat.com/security/cve/CVE-2020-8566\nhttps://access.redhat.com/security/cve/CVE-2020-8619\nhttps://access.redhat.com/security/cve/CVE-2020-8622\nhttps://access.redhat.com/security/cve/CVE-2020-8623\nhttps://access.redhat.com/security/cve/CVE-2020-8624\nhttps://access.redhat.com/security/cve/CVE-2020-8647\nhttps://access.redhat.com/security/cve/CVE-2020-8648\nhttps://access.redhat.com/security/cve/CVE-2020-8649\nhttps://access.redhat.com/security/cve/CVE-2020-9327\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com
/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-10732\nhttps://access.redhat.com/security/cve/CVE-2020-10749\nhttps://access.redhat.com/security/cve/CVE-2020-10751\nhttps://access.redhat.com/security/cve/CVE-2020-10763\nhttps://access.redhat.com/security/cve/CVE-2020-10773\nhttps://access.redhat.com/security/cve/CVE-2020-10774\nhttps://access.redhat.com/security/cve/CVE-2020-10942\nhttps://access.redhat.com/security/cve/CVE-2020-11565\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-12465\nhttps://access.redhat.com/security/cve/CVE-2020-12655\nhttps://access.redhat.com/security/cve/CVE-2020-12659\nhttps://access.redhat.com/security/cve/CVE-2020-12770\nhttps://access.redhat.com/security/cve/CVE-2020-12826\nhttps://access.redhat.com/security/cve/CVE-2020-13249\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14019\nhttps://access.redhat.com/security/cve/CVE-2020-14040\nhttps://access.redhat.com/security/cve/CVE-2020-14381\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nh
ttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15157\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-15862\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2020-16166\nhttps://access.redhat.com/security/cve/CVE-2020-24490\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-25211\nhttps://access.redhat.com/security/cve/CVE-2020-25641\nhttps://access.redhat.com/security/cve/CVE-2020-25658\nhttps://access.redhat.com/security/cve/CVE-2020-25661\nhttps://access.redhat.com/security/cve/CVE-2020-25662\nhttps://access.redhat.com/security/cve/CVE-2020-25681\nhttps://access.redhat.com/security/cve/CVE-2020-25682\nhttps://access.redhat.com/security/cve/CVE-2020-25683\nhttps://access.redhat.com/security/cve/CVE-2020-25684\nhttps://access.redhat.com/security/cve/CVE-2020-25685\nhttps://access.redhat.com/security/cve/CVE-2020-25686\nhttps://access.redhat.com/security/cve/CVE-2020-25687\nhttps://access.redhat.com/security/cve/CVE-2020-25694\nhttps://access.redhat.com/security/cve/CVE-2020-25696\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-27813\nhttps://access.redhat.com/security/cve/CVE-2020-27846\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/cve/CVE-2020-29652\nhttps://access.redhat.com/security/cve/CVE-2021-2007\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
Bug Fix(es):

* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

* The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

* The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

* [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

* The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

* Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

* [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

3. 
Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

5. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from `oc explain`
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. 
Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - noarch\n\n3. Description:\n\nWebKitGTK+ is port of the WebKit portable web rendering engine to the GTK+\nplatform. These packages provide WebKitGTK+ for GTK+ 3. \n\nThe following packages have been upgraded to a later upstream version:\nwebkitgtk4 (2.28.2). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1667409 - CVE-2019-6251 webkitgtk: processing maliciously crafted web content leads to URI spoofing\n1709289 - CVE-2019-11070 webkitgtk: HTTP proxy setting deanonymization information disclosure\n1719199 - CVE-2019-8506 webkitgtk: malicious web content leads to arbitrary code execution\n1719209 - CVE-2019-8524 webkitgtk: malicious web content leads to arbitrary code execution\n1719210 - CVE-2019-8535 webkitgtk: malicious crafted web content leads to arbitrary code execution\n1719213 - CVE-2019-8536 webkitgtk: malicious crafted web content leads to arbitrary code execution\n1719224 - CVE-2019-8544 webkitgtk: malicious crafted web content leads to arbitrary code execution\n1719231 - CVE-2019-8558 webkitgtk: malicious crafted web content leads to arbitrary code execution\n1719235 - CVE-2019-8559 webkitgtk: malicious web content leads to arbitrary code execution\n1719237 - CVE-2019-8563 webkitgtk: malicious web content leads to arbitrary code execution\n1719238 - CVE-2019-8551 webkitgtk: malicious web content leads to cross site scripting\n1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp\n1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution\n1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1817144 - Rebase WebKitGTK to 2.28\n1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content\n1876462 - CVE-2020-3885 webkitgtk: Incorrect processing of file URLs\n1876463 - CVE-2020-3894 webkitgtk: Race condition allows reading of restricted memory\n1876465 - CVE-2020-3895 webkitgtk: Memory corruption triggered by a malicious web content\n1876468 - CVE-2020-3897 webkitgtk: Type confusion leading to arbitrary code execution\n1876470 - CVE-2020-3899 webkitgtk: Memory 
consumption issue leading to arbitrary code execution\n1876472 - CVE-2020-3900 webkitgtk: Memory corruption  triggered by a malicious web content\n1876473 - CVE-2020-3901 webkitgtk: Type confusion leading to arbitrary code execution\n1876476 - CVE-2020-3902 webkitgtk: Input validation issue leading to cross-site script attack\n1876516 - CVE-2020-3862 webkitgtk: Denial of service via incorrect memory handling\n1876518 - CVE-2020-3864 webkitgtk: Non-unique security origin for DOM object contexts\n1876521 - CVE-2020-3865 webkitgtk: Incorrect security check for a top-level DOM object context\n1876522 - CVE-2020-3867 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876523 - CVE-2020-3868 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876536 - CVE-2019-8710 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876537 - CVE-2019-8743 webkitgtk: Multiple memory corruption  issues leading to arbitrary code execution\n1876540 - CVE-2019-8764 webkitgtk: Incorrect state  management leading to universal cross-site scripting\n1876542 - CVE-2019-8765 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876543 - CVE-2019-8766 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876545 - CVE-2019-8782 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876548 - CVE-2019-8783 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876549 - CVE-2019-8808 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876550 - CVE-2019-8811 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876552 - CVE-2019-8812 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876553 - CVE-2019-8813 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876554 - CVE-2019-8814 
webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876555 - CVE-2019-8815 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876556 - CVE-2019-8816 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876590 - CVE-2019-8819 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876591 - CVE-2019-8820 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876592 - CVE-2019-8821 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876593 - CVE-2019-8822 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876594 - CVE-2019-8823 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876607 - CVE-2019-8625 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876608 - CVE-2019-8674 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876609 - CVE-2019-8707 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876610 - CVE-2019-8719 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876611 - CVE-2019-8720 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876612 - CVE-2019-8726 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876613 - CVE-2019-8733 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876614 - CVE-2019-8735 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876615 - CVE-2019-8763 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876616 - CVE-2019-8768 webkitgtk: Browsing history could not be deleted\n1876617 - CVE-2019-8769 webkitgtk: Websites could reveal browsing history\n1876619 - CVE-2019-8771 webkitgtk: Violation of iframe sandboxing 
policy\n1876626 - CVE-2019-8644 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876628 - CVE-2019-8649 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876629 - CVE-2019-8658 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876630 - CVE-2019-8666 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876631 - CVE-2019-8669 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876632 - CVE-2019-8671 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876634 - CVE-2019-8672 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876643 - CVE-2019-8673 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876644 - CVE-2019-8676 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876645 - CVE-2019-8677 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876646 - CVE-2019-8678 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876647 - CVE-2019-8679 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876648 - CVE-2019-8680 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876650 - CVE-2019-8681 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876651 - CVE-2019-8683 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876652 - CVE-2019-8684 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876653 - CVE-2019-8686 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876655 - CVE-2019-8687 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876656 - CVE-2019-8688 webkitgtk: Multiple memory corruption issues leading to 
arbitrary code execution\n1876657 - CVE-2019-8689 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876664 - CVE-2019-8690 webkitgtk: Incorrect state management leading to universal cross-site scripting\n1876880 - CVE-2019-6237 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876881 - CVE-2019-8571 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876882 - CVE-2019-8583 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876883 - CVE-2019-8584 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876884 - CVE-2019-8586 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876887 - CVE-2019-8587 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876891 - CVE-2019-8594 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876892 - CVE-2019-8595 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876893 - CVE-2019-8596 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876894 - CVE-2019-8597 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876895 - CVE-2019-8601 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876897 - CVE-2019-8607 webkitgtk: Out-of-bounds read leading to memory disclosure\n1876898 - CVE-2019-8608 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876899 - CVE-2019-8609 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1876900 - CVE-2019-8610 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1877045 - CVE-2019-8615 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1877046 - CVE-2019-8611 webkitgtk: Multiple memory corruption issues leading to 
arbitrary code execution\n1877047 - CVE-2019-8619 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1877048 - CVE-2019-8622 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n1877049 - CVE-2019-8623 webkitgtk: Multiple memory corruption issues leading to arbitrary code execution\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nppc64:\nwebkitgtk4-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm\n\nppc64le:\nwebkitgtk4-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm\n\ns390x:\nwebkitgtk4-2.28.2-2.el7.s390.rpm\nwebkitgtk4-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390x.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nppc64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm\n\ns390x:\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202003-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: March 15, 2020\n     Bugs: #699156, #706374, #709612\n       ID: 202003-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich may lead to arbitrary code execution. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk          \u003c 2.26.4                  \u003e= 2.26.4\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the referenced CVE identifiers for details. 
\n\nImpact\n======\n\nA remote attacker could execute arbitrary code, cause a Denial of\nService condition, bypass intended memory-read restrictions, conduct a\ntiming side-channel attack to bypass the Same Origin Policy or obtain\nsensitive information. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.26.4\"\n\nReferences\n==========\n\n[  1 ] CVE-2019-8625\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8625\n[  2 ] CVE-2019-8674\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8674\n[  3 ] CVE-2019-8707\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8707\n[  4 ] CVE-2019-8710\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8710\n[  5 ] CVE-2019-8719\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8719\n[  6 ] CVE-2019-8720\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8720\n[  7 ] CVE-2019-8726\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8726\n[  8 ] CVE-2019-8733\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8733\n[  9 ] CVE-2019-8735\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8735\n[ 10 ] CVE-2019-8743\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8743\n[ 11 ] CVE-2019-8763\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8763\n[ 12 ] CVE-2019-8764\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8764\n[ 13 ] CVE-2019-8765\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8765\n[ 14 ] CVE-2019-8766\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8766\n[ 15 ] CVE-2019-8768\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8768\n[ 16 ] CVE-2019-8769\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8769\n[ 17 ] CVE-2019-8771\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8771\n[ 18 ] CVE-2019-8782\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8782\n[ 19 ] CVE-2019-8783\n       
https://nvd.nist.gov/vuln/detail/CVE-2019-8783\n[ 20 ] CVE-2019-8808\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8808\n[ 21 ] CVE-2019-8811\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8811\n[ 22 ] CVE-2019-8812\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8812\n[ 23 ] CVE-2019-8813\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8813\n[ 24 ] CVE-2019-8814\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8814\n[ 25 ] CVE-2019-8815\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8815\n[ 26 ] CVE-2019-8816\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8816\n[ 27 ] CVE-2019-8819\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8819\n[ 28 ] CVE-2019-8820\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8820\n[ 29 ] CVE-2019-8821\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8821\n[ 30 ] CVE-2019-8822\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8822\n[ 31 ] CVE-2019-8823\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8823\n[ 32 ] CVE-2019-8835\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8835\n[ 33 ] CVE-2019-8844\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8844\n[ 34 ] CVE-2019-8846\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8846\n[ 35 ] CVE-2020-3862\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3862\n[ 36 ] CVE-2020-3864\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3864\n[ 37 ] CVE-2020-3865\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3865\n[ 38 ] CVE-2020-3867\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3867\n[ 39 ] CVE-2020-3868\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3868\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202003-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-3864"
      },
      {
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      }
    ],
    "trust": 1.8
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-3864",
        "trust": 2.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0538",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0567",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.0677",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156398",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156392",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-181989",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3864",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160624",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159375",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "id": "VAR-202010-1327",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181989"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:47:24.825000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "WebKitGTK  and WPE WebKit Security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=110732"
      },
      {
        "title": "Ubuntu Security Notice: webkit2gtk vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4281-1"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2020-3864 log"
      },
      {
        "title": "Debian Security Advisories: DSA-4627-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=78d59627ce9fc1018e3643bc1e16ec00"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202002-10] webkit2gtk: multiple issues",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202002-10"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "CVE-2020-3864",
        "trust": 0.1,
        "url": "https://github.com/JamesGeee/CVE-2020-3864 "
      },
      {
        "title": "Threatpost",
        "trust": 0.1,
        "url": "https://threatpost.com/apple-safari-flaws-webcam-access/154476/"
      },
      {
        "title": "BleepingComputer",
        "trust": 0.1,
        "url": "https://www.bleepingcomputer.com/news/security/apple-paid-75k-for-bugs-letting-sites-hijack-iphone-cameras/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-346",
        "trust": 1.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht210918"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht210920"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht210922"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht210923"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht210947"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht210948"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0538/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156392/webkitgtk-wpe-webkit-dos-logic-issue-code-execution.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156398/debian-security-advisory-4627-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-five-vulnerabilities-31619"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0567/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.0677/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.4,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/346.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/jamesgeee/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4281-1/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5605"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8688"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8678"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8669"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8644"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8680"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-10-27T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "date": "2020-10-27T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "date": "2022-08-09T14:36:05",
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "date": "2020-12-18T19:14:41",
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-09-30T15:47:21",
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "date": "2020-03-15T14:00:23",
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "date": "2020-02-17T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      },
      {
        "date": "2020-10-27T21:15:15.167000",
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-05-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-181989"
      },
      {
        "date": "2021-05-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3864"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      },
      {
        "date": "2024-11-21T05:31:51.653000",
        "db": "NVD",
        "id": "CVE-2020-3864"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "WebKitGTK and WebKit Access control error vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "access control error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202002-847"
      }
    ],
    "trust": 0.6
  }
}

VAR-201401-0579

Vulnerability from variot - Updated: 2025-12-22 22:47

expat before version 2.4.0 does not properly handle entity expansion unless an application developer uses the XML_SetEntityDeclHandler function, which allows remote attackers to cause a denial of service (resource consumption), send HTTP requests to intranet servers, or read arbitrary files via a crafted XML document, aka an XML External Entity (XXE) issue. NOTE: it could be argued that because expat already provides the ability to disable external entity expansion, the responsibility for resolving this issue lies with application developers; according to this argument, this entry should be REJECTed, and each affected application would need its own CVE. Expat is prone to multiple denial-of-service vulnerabilities: successful exploits allow attackers to consume large amounts of memory and cause a crash via specially crafted XML containing malicious attributes. Expat 2.1.0 and prior versions are vulnerable. Expat is a stream-oriented XML parser library written in C, originally developed by James Clark.
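The mitigation the CVE text alludes to can be sketched with Python's stdlib `xml.parsers.expat` binding to the same library: register an `EntityDeclHandler` and refuse any document that declares entities before they can be expanded or fetched. This is an illustrative sketch of the application-side check, not code from any affected product; the helper names are our own.

```python
# Sketch: using expat's EntityDeclHandler (via Python's pyexpat binding)
# to reject untrusted XML that declares entities, the application-side
# mitigation the CVE-2013-0340 description refers to.
import xml.parsers.expat


class EntityDeclared(Exception):
    pass


def _reject_entities(name, is_param, value, base, system_id, public_id, notation):
    # Fires for every <!ENTITY ...> declaration, internal or external;
    # raising here aborts the parse before any expansion happens.
    raise EntityDeclared(f"entity declared: {name!r}")


def parse_untrusted(xml_bytes):
    parser = xml.parsers.expat.ParserCreate()
    parser.EntityDeclHandler = _reject_entities
    try:
        parser.Parse(xml_bytes, True)
        return True   # parsed cleanly, no entity declarations seen
    except EntityDeclared:
        return False  # document tried to declare entities; refuse it


safe = b"<doc>hello</doc>"
xxe = b'<!DOCTYPE doc [<!ENTITY xxe SYSTEM "file:///etc/passwd">]><doc>&xxe;</doc>'
```

Calling `parse_untrusted(safe)` accepts the plain document, while the XXE payload is rejected at the declaration, before expat ever resolves the external reference.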


Gentoo Linux Security Advisory GLSA 201701-21


                                       https://security.gentoo.org/

Severity: Normal
Title: Expat: Multiple vulnerabilities
Date: January 11, 2017
Bugs: #458742, #555642, #577928, #583268, #585510
ID: 201701-21


Synopsis

Multiple vulnerabilities have been found in Expat, the worst of which may allow execution of arbitrary code.

Background

Expat is a set of XML parsing libraries.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  dev-libs/expat            < 2.2.0-r1                >= 2.2.0-r1

Description

Multiple vulnerabilities have been discovered in Expat. Please review the CVE identifiers referenced below for details. This attack could also be used against automated systems that arbitrarily process XML files.

Workaround

There is no known workaround at this time.

Resolution

All Expat users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=dev-libs/expat-2.2.0-r1"

References

[ 1 ] CVE-2012-6702
      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2012-6702
[ 2 ] CVE-2013-0340
      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2013-0340
[ 3 ] CVE-2015-1283
      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2015-1283
[ 4 ] CVE-2016-0718
      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2016-0718
[ 5 ] CVE-2016-4472
      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2016-4472
[ 6 ] CVE-2016-5300
      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2016-5300

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/201701-21

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2017 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

http://creativecommons.org/licenses/by-sa/2.5


Alternatively, on your watch, select "My Watch > General > About".

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2021-10-26-9 Additional information for APPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15

iOS 15 and iPadOS 15 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212814.

Accessory Manager Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory consumption issue was addressed with improved memory handling. CVE-2021-30837: Siddharth Aeri (@b1n4r1b01)

AppleMobileFileIntegrity Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to read sensitive information Description: This issue was addressed with improved checks. CVE-2021-30811: an anonymous researcher working with Compartir

Apple Neural Engine Available for devices with Apple Neural Engine: iPhone 8 and later, iPad Pro (3rd generation) and later, iPad Air (3rd generation) and later, and iPad mini (5th generation) Impact: A malicious application may be able to execute arbitrary code with system privileges on devices with an Apple Neural Engine Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30838: proteas wang

bootp Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A device may be passively tracked by its WiFi MAC address Description: A user privacy issue was addressed by removing the broadcast MAC address. CVE-2021-30866: Fabien Duchêne of UCLouvain (Belgium) Entry added October 25, 2021

CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a malicious audio file may result in unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30834: JunDong Xie of Ant Security Light-Year Lab Entry added October 25, 2021

CoreML Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30825: hjy79425575 working with Trend Micro Zero Day Initiative

Face ID Available for devices with Face ID: iPhone X, iPhone XR, iPhone XS (all models), iPhone 11 (all models), iPhone 12 (all models), iPad Pro (11-inch), and iPad Pro (3rd generation) Impact: A 3D model constructed to look like the enrolled user may be able to authenticate via Face ID Description: This issue was addressed by improving Face ID anti-spoofing models. CVE-2021-30863: Wish Wu (吴潍浠 @wish_wu) of Ant-financial Light-Year Security Lab

FaceTime Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker with physical access to a device may be able to see private contact information Description: The issue was addressed with improved permissions logic. CVE-2021-30816: Atharv (@atharv0x0) Entry added October 25, 2021

FaceTime Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application with microphone permission may unexpectedly access microphone input during a FaceTime call Description: A logic issue was addressed with improved validation. CVE-2021-30882: Adam Bellard and Spencer Reitman of Airtime Entry added October 25, 2021

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font may result in the disclosure of process memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30831: Xingwei Lin of Ant Security Light-Year Lab Entry added October 25, 2021

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted dfont file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30840: Xingwei Lin of Ant Security Light-Year Lab Entry added October 25, 2021

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted dfont file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab

Foundation Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved memory handling. CVE-2021-30852: Yinyi Wu (@3ndy1) of Ant Security Light-Year Lab Entry added October 25, 2021

iCloud Photo Library Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to access photo metadata without needing permission to access photos Description: The issue was addressed with improved authentication. CVE-2021-30867: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 25, 2021

ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2021-30814: hjy79425575 Entry added October 25, 2021

ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30835: Ye Zhang of Baidu Security CVE-2021-30847: Mike Zhang of Pangu Lab

Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2021-30857: Zweig of Kunlun Lab

libexpat Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed by updating expat to version 2.4.1. CVE-2013-0340: an anonymous researcher

Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30819: Apple

NetworkExtension Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A VPN configuration may be installed by an app without user permission Description: An authorization issue was addressed with improved state management. CVE-2021-30874: Javier Vieira Boccardo (linkedin.com/javier-vieira-boccardo) Entry added October 25, 2021

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to access restricted files Description: A validation issue existed in the handling of symlinks. This issue was addressed with improved validation of symlinks. CVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)
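The symlink-validation fix described above is a common pattern: resolve the user-supplied path and confirm it still lies inside the allowed directory. The sketch below illustrates that class of check in portable Python; it is our own illustration of the technique, not Apple's implementation, and the helper name is hypothetical.

```python
# Illustrative sketch (not Apple's code) of validating symlinks before
# granting file access: resolve the path fully, then confirm it did not
# escape the permitted base directory via a symlink.
import os


def open_within(base_dir, user_path):
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # realpath resolves symlinks, so a link pointing outside base is caught
    if os.path.commonpath([base, target]) != base:
        raise PermissionError(f"{user_path!r} resolves outside {base_dir!r}")
    return open(target, "rb")
```

A regular file inside the directory opens normally; a symlink planted inside the directory but pointing elsewhere (e.g. at a system file) is refused.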

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved state management. CVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)

Quick Look Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Previewing an html file attached to a note may unexpectedly contact remote servers Description: A logic issue existed in the handling of document loads. This issue was addressed with improved state management. CVE-2021-30870: Saif Hamed Al Hinai Oman CERT Entry added October 25, 2021

Sandbox Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to modify protected parts of the file system Description: This issue was addressed with improved checks. CVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 25, 2021

Siri Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to view contacts from the lock screen Description: A lock screen issue allowed access to contacts on a locked device. This issue was addressed with improved state management. CVE-2021-30815: an anonymous researcher

Telephony Available for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad Air 2, iPad (5th generation), and iPad mini 4 Impact: In certain situations, the baseband would fail to enable integrity and ciphering protection Description: A logic issue was addressed with improved state management. CVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST SysSec Lab

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Visiting a maliciously crafted website may reveal a user's browsing history Description: The issue was resolved with additional restrictions on CSS compositing. CVE-2021-30884: an anonymous researcher Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved state handling. CVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted audio file may disclose restricted memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30836: Peter Nguyen Vu Hoang of STAR Labs Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30809: an anonymous researcher Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30846: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30848: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30849: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption vulnerability was addressed with improved locking. CVE-2021-30851: Samuel Groß of Google Project Zero

Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker in physical proximity may be able to force a user onto a malicious Wi-Fi network during device setup Description: An authorization issue was addressed with improved state management. CVE-2021-30810: an anonymous researcher

Additional recognition

Assets We would like to acknowledge Cees Elzinga for their assistance.

Bluetooth We would like to acknowledge an anonymous researcher for their assistance.

File System We would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their assistance.

Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.

UIKit We would like to acknowledge an anonymous researcher for their assistance.

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About * The version after applying this update will be "15"

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmF4hy0ACgkQeC9qKD1p
rhiHNRAAwUaVHgd+whk6qGBZ3PYqSbvvuuo00rLW6JIqv9dwpEh9BBD//bSsUppb
41J5VaNoKDsonTLhXt0Mhn66wmhbGjLneMIoNb7ffl7O2xDQaWAr+HmoUm6wOo48
Kqj/wJGNJJov4ucBA6InpUz1ZevEhaPU4QMNedVck4YSl1GhtSTJsBAzVkMakQhX
uJ1fVdOJ5konmmQJLYxDUo60xqS0sZPchkwCM1zwR/SAZ70pt6P0MGI1Yddjcn1U
loAcKYVgkKAc9RWkXRskR1RxFBGivTI/gy5pDkLxfGfwFecf6PSR7MDki4xDeoVH
5FWXBwga8Uc/afGRqnFwTpdsisRZP8rQFwMam1T/DwgrWD8R2CCn/wOcvbtlWMIv
LczYCJFMELaXOjFF5duXaUJme97567OypYvhjBDtiIPg5MCGhZZCmpbRjkcUBZNJ
YQOELzq6CHWc96mjPOt34B0X2VXGhvgpQ0/evvcQe3bHv0F7N/acAlgsGe+e4Jn8
k0gWZocq+fPnl6YYgZKIGgcZWUl5bdqduApesEtpRU2ug2TE+xMOhMZXb1WLawJl
n/OtVHhIjft23r0MGgyWTIHMPe5DRvEPWGI3DS+55JX6XOxSGp9o6xgOAraZR4U6
HO/WbQOwj7SSKbyPxmDTp4OMyFPukbe92WIMh5EpFcILp6GTJqQ=
=lg51
-----END PGP SIGNATURE-----

Show details on source website

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201401-0579",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "python",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.7.0"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.6"
      },
      {
        "model": "python",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.6.0"
      },
      {
        "model": "python",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.8.12"
      },
      {
        "model": "python",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.6.15"
      },
      {
        "model": "python",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.9.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.8"
      },
      {
        "model": "python",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.7.12"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "8.0"
      },
      {
        "model": "libexpat",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "libexpat",
        "version": "2.4.0"
      },
      {
        "model": "python",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.9.7"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.8"
      },
      {
        "model": "python",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "python",
        "version": "3.8.0"
      },
      {
        "model": "expat",
        "scope": "lte",
        "trust": 0.8,
        "vendor": "expat",
        "version": "2.1.0"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "1.95.4"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "2.1.0"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "1.95.8"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "2.0.1"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "1.95.1"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "1.95.5"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "1.95.6"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "1.95.2"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "1.95.7"
      },
      {
        "model": "expat",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "libexpat",
        "version": "2.0.0"
      },
      {
        "model": "clark expat",
        "scope": "eq",
        "trust": 0.3,
        "vendor": "james",
        "version": "2.1"
      },
      {
        "model": "clark expat",
        "scope": "eq",
        "trust": 0.3,
        "vendor": "james",
        "version": "2.0.1"
      }
    ],
    "sources": [
      {
        "db": "BID",
        "id": "58233"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:libexpat:expat",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      }
    ],
    "trust": 0.5
  },
  "cve": "CVE-2013-0340",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2013-0340",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-60342",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2013-0340",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2013-0340",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201303-096",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-60342",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2013-0340",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "expat before version 2.4.0 does not properly handle entities expansion unless an application developer uses the XML_SetEntityDeclHandler function, which allows remote attackers to cause a denial of service (resource consumption), send HTTP requests to intranet servers, or read arbitrary files via a crafted XML document, aka an XML External Entity (XXE) issue.  NOTE: it could be argued that because expat already provides the ability to disable external entity expansion, the responsibility for resolving this issue lies with application developers; according to this argument, this entry should be REJECTed, and each affected application would need its own CVE. Expat is prone to multiple denial-of-service vulnerabilities. \nSuccessful exploits will allow attackers to consume large amounts of memory and cause a crash through specially crafted XML containing malicious attributes. \nExpat 2.1.0 and prior versions are vulnerable. Expat is a C language-based XML parser library developed by American software developer Jim Clark, which uses a stream-oriented parser. \n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 201701-21\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: Expat: Multiple vulnerabilities\n     Date: January 11, 2017\n     Bugs: #458742, #555642, #577928, #583268, #585510\n       ID: 201701-21\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in Expat, the worst of which\nmay allow execution of arbitrary code. \n\nBackground\n==========\n\nExpat is a set of XML parsing libraries. 
\n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  dev-libs/expat              \u003c 2.2.0-r1               \u003e= 2.2.0-r1\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in Expat. Please review\nthe CVE identifiers referenced below for details.  This attack could also\nbe used against automated systems that arbitrarily process XML files. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll Expat users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=dev-libs/expat-2.2.0-r1\"\n\nReferences\n==========\n\n[ 1 ] CVE-2012-6702\n      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2012-6702\n[ 2 ] CVE-2013-0340\n      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2013-0340\n[ 3 ] CVE-2015-1283\n      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2015-1283\n[ 4 ] CVE-2016-0718\n      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2016-0718\n[ 5 ] CVE-2016-4472\n      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2016-4472\n[ 6 ] CVE-2016-5300\n      http://nvd.nist.gov/nvd.cfm?cvename=CVE-2016-5300\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/201701-21\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2017 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). 
\n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttp://creativecommons.org/licenses/by-sa/2.5\n\n\n. \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\". -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-10-26-9 Additional information for\nAPPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15\n\niOS 15 and iPadOS 15 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212814. \n\nAccessory Manager\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nCVE-2021-30837: Siddharth Aeri (@b1n4r1b01)\n\nAppleMobileFileIntegrity\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to read sensitive information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30811: an anonymous researcher working with Compartir\n\nApple Neural Engine\nAvailable for devices with Apple Neural Engine: iPhone 8 and later,\niPad Pro (3rd generation) and later, iPad Air (3rd generation) and\nlater, and iPad mini (5th generation) \nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges on devices with an Apple Neural Engine\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30838: proteas wang\n\nbootp\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A device may be passively tracked by its WiFi MAC address\nDescription: A user privacy issue was addressed by removing the\nbroadcast MAC address. \nCVE-2021-30866: Fabien Duch\u00eane of UCLouvain (Belgium)\nEntry added October 25, 2021\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a malicious audio file may result in unexpected\napplication termination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30834: JunDong Xie of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\nCoreML\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30825: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nFace ID\nAvailable for devices with Face ID: iPhone X, iPhone XR, iPhone XS\n(all models), iPhone 11 (all models), iPhone 12 (all models), iPad\nPro (11-inch), and iPad Pro (3rd generation)\nImpact: A 3D model constructed to look like the enrolled user may be\nable to authenticate via Face ID\nDescription: This issue was addressed by improving Face ID anti-\nspoofing models. 
\nCVE-2021-30863: Wish Wu (\u5434\u6f4d\u6d60 @wish_wu) of Ant-financial Light-Year\nSecurity Lab\n\nFaceTime\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker with physical access to a device may be able to\nsee private contact information\nDescription: The issue was addressed with improved permissions logic. \nCVE-2021-30816: Atharv (@atharv0x0)\nEntry added October 25, 2021\n\nFaceTime\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application with microphone permission may unexpectedly\naccess microphone input during a FaceTime call\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30882: Adam Bellard and Spencer Reitman of Airtime\nEntry added October 25, 2021\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font may result in the\ndisclosure of process memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30831: Xingwei Lin of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted dfont file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30840: Xingwei Lin of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted dfont file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab\n\nFoundation\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved\nmemory handling. \nCVE-2021-30852: Yinyi Wu (@3ndy1) of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\niCloud Photo Library\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to access photo metadata\nwithout needing permission to access photos\nDescription: The issue was addressed with improved authentication. \nCVE-2021-30867: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 25, 2021\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. 
\nCVE-2021-30814: hjy79425575\nEntry added October 25, 2021\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30835: Ye Zhang of Baidu Security\nCVE-2021-30847: Mike Zhang of Pangu Lab\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A race condition was addressed with improved locking. \nCVE-2021-30857: Zweig of Kunlun Lab\n\nlibexpat\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed by updating expat to version\n2.4.1. \nCVE-2013-0340: an anonymous researcher\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30819: Apple\n\nNetworkExtension\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A VPN configuration may be installed by an app without user\npermission\nDescription: An authorization issue was addressed with improved state\nmanagement. 
\nCVE-2021-30874: Javier Vieira Boccardo (linkedin.com/javier-vieira-\nboccardo)\nEntry added October 25, 2021\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to access restricted files\nDescription: A validation issue existed in the handling of symlinks. \nThis issue was addressed with improved validation of symlinks. \nCVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nQuick Look\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Previewing an html file attached to a note may unexpectedly\ncontact remote servers\nDescription: A logic issue existed in the handling of document loads. \nThis issue was addressed with improved state management. \nCVE-2021-30870: Saif Hamed Al Hinai Oman CERT\nEntry added October 25, 2021\n\nSandbox\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 25, 2021\n\nSiri\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to view contacts from the lock\nscreen\nDescription: A lock screen issue allowed access to contacts on a\nlocked device. This issue was addressed with improved state\nmanagement. \nCVE-2021-30815: an anonymous researcher\n\nTelephony\nAvailable for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad\nAir 2, iPad (5th generation), and iPad mini 4\nImpact: In certain situations, the baseband would fail to enable\nintegrity and ciphering protection\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST\nSysSec Lab\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Visiting a maliciously crafted website may reveal a user\u0027s\nbrowsing history\nDescription: The issue was resolved with additional restrictions on\nCSS compositing. \nCVE-2021-30884: an anonymous researcher\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved state\nhandling. 
\nCVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted audio file may disclose\nrestricted memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30836: Peter Nguyen Vu Hoang of STAR Labs\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30809: an anonymous researcher\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2021-30846: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30848: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30849: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption vulnerability was addressed with\nimproved locking. \nCVE-2021-30851: Samuel Gro\u00df of Google Project Zero\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker in physical proximity may be able to force a user\nonto a malicious Wi-Fi network during device setup\nDescription: An authorization issue was addressed with improved state\nmanagement. \nCVE-2021-30810: an anonymous researcher\n\nAdditional recognition\n\nAssets\nWe would like to acknowledge Cees Elzinga for their assistance. \n\nBluetooth\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nFile System\nWe would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their\nassistance. \n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nUIKit\nWe would like to acknowledge an anonymous researcher for their\nassistance. 
\n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \"15\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmF4hy0ACgkQeC9qKD1p\nrhiHNRAAwUaVHgd+whk6qGBZ3PYqSbvvuuo00rLW6JIqv9dwpEh9BBD//bSsUppb\n41J5VaNoKDsonTLhXt0Mhn66wmhbGjLneMIoNb7ffl7O2xDQaWAr+HmoUm6wOo48\nKqj/wJGNJJov4ucBA6InpUz1ZevEhaPU4QMNedVck4YSl1GhtSTJsBAzVkMakQhX\nuJ1fVdOJ5konmmQJLYxDUo60xqS0sZPchkwCM1zwR/SAZ70pt6P0MGI1Yddjcn1U\nloAcKYVgkKAc9RWkXRskR1RxFBGivTI/gy5pDkLxfGfwFecf6PSR7MDki4xDeoVH\n5FWXBwga8Uc/afGRqnFwTpdsisRZP8rQFwMam1T/DwgrWD8R2CCn/wOcvbtlWMIv\nLczYCJFMELaXOjFF5duXaUJme97567OypYvhjBDtiIPg5MCGhZZCmpbRjkcUBZNJ\nYQOELzq6CHWc96mjPOt34B0X2VXGhvgpQ0/evvcQe3bHv0F7N/acAlgsGe+e4Jn8\nk0gWZocq+fPnl6YYgZKIGgcZWUl5bdqduApesEtpRU2ug2TE+xMOhMZXb1WLawJl\nn/OtVHhIjft23r0MGgyWTIHMPe5DRvEPWGI3DS+55JX6XOxSGp9o6xgOAraZR4U6\nHO/WbQOwj7SSKbyPxmDTp4OMyFPukbe92WIMh5EpFcILp6GTJqQ=\n=lg51\n-----END PGP SIGNATURE-----\n\n\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2013-0340"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "db": "BID",
        "id": "58233"
      },
      {
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "140431"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      }
    ],
    "trust": 2.61
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2013-0340",
        "trust": 3.5
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2013/04/12/6",
        "trust": 2.6
      },
      {
        "db": "OSVDB",
        "id": "90634",
        "trust": 2.6
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2013/02/22/3",
        "trust": 2.1
      },
      {
        "db": "BID",
        "id": "58233",
        "trust": 2.1
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/10/07/4",
        "trust": 1.8
      },
      {
        "db": "SECTRACK",
        "id": "1028213",
        "trust": 1.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164692",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3155",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.2136",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6369.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3578",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5875",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164249",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021092024",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021052301",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164689",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164693",
        "trust": 0.2
      },
      {
        "db": "VULHUB",
        "id": "VHN-60342",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2013-0340",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164233",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "140431",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164234",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "db": "BID",
        "id": "58233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "140431"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "id": "VAR-201401-0579",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-60342"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:47:23.380000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Top Page",
        "trust": 0.8,
        "url": "http://expat.sourceforge.net/"
      },
      {
        "title": "Debian CVElist Bug Report Logs: expat: CVE-2013-0340",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=ed0a8ab828c24c20ec91625d054dc98d"
      },
      {
        "title": "IBM: Security Bulletin:  IBM HTTP Server is vulnerable to  denial of service due to libexpat  (CVE-2022-43680, CVE-2013-0340, CVE-2017-9233)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3f59486ef7ccf0e951141215c837feab"
      },
      {
        "title": "IBM: IBM Security Bulletin: IBM Notes 9 and Domino 9 are affected by Open Source James Clark Expat Vulnerabilities (CVE-2013-0340, CVE-2013-0341)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=1027f59d4cbfc61c314d392910ac817e"
      },
      {
        "title": "IBM: Security Bulletin:  IBM HTTP Server is vulnerable to  denial of service due to libexpat  (CVE-2022-43680, CVE-2013-0340, CVE-2017-9233)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=6567dd4ebc135fb0a5163d77870109bf"
      },
      {
        "title": "Siemens Security Advisories: Siemens Security Advisory",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
      },
      {
        "title": "gost",
        "trust": 0.1,
        "url": "https://github.com/vulsio/gost "
      },
      {
        "title": "gost",
        "trust": 0.1,
        "url": "https://github.com/knqyf263/gost "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-611",
        "trust": 1.1
      },
      {
        "problemtype": "CWE-264",
        "trust": 0.9
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.6,
        "url": "http://www.osvdb.org/90634"
      },
      {
        "trust": 2.6,
        "url": "http://www.openwall.com/lists/oss-security/2013/04/12/6"
      },
      {
        "trust": 2.5,
        "url": "http://www.securityfocus.com/bid/58233"
      },
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/201701-21"
      },
      {
        "trust": 1.8,
        "url": "http://securitytracker.com/id?1028213"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/sep/33"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/sep/34"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/sep/35"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/sep/38"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/sep/39"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/sep/40"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/oct/62"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/oct/63"
      },
      {
        "trust": 1.8,
        "url": "http://seclists.org/fulldisclosure/2021/oct/61"
      },
      {
        "trust": 1.8,
        "url": "https://lists.apache.org/thread.html/r41eca5f4f09e74436cbb05dec450fc2bef37b5d3e966aa7cc5fada6d%40%3cannounce.apache.org%3e"
      },
      {
        "trust": 1.8,
        "url": "https://lists.apache.org/thread.html/rfb2c193360436e230b85547e85a41bea0916916f96c501f5b6fc4702%40%3cusers.openoffice.apache.org%3e"
      },
      {
        "trust": 1.8,
        "url": "http://openwall.com/lists/oss-security/2013/02/22/3"
      },
      {
        "trust": 1.8,
        "url": "http://www.openwall.com/lists/oss-security/2021/10/07/4"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212804"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212805"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212807"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212814"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212815"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212819"
      },
      {
        "trust": 1.0,
        "url": "https://github.com/libexpat/libexpat/blob/r_2_4_1/expat/changes"
      },
      {
        "trust": 0.8,
        "url": "http://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2013-0340"
      },
      {
        "trust": 0.8,
        "url": "http://web.nvd.nist.gov/view/vuln/detail?vulnid=cve-2013-0340"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2013-0340"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/rfb2c193360436e230b85547e85a41bea0916916f96c501f5b6fc4702@%3cusers.openoffice.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r41eca5f4f09e74436cbb05dec450fc2bef37b5d3e966aa7cc5fada6d@%3cannounce.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "http://www.ibm.com/support/docview.wss?uid=swg22010778"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021052301"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3155"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6369.2"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht212815"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164249/apple-security-advisory-2021-09-20-8.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3578"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.2136/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164692/apple-security-advisory-2021-10-26-10.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5875"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021092024"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30841"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30835"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30810"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30843"
      },
      {
        "trust": 0.5,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30847"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30842"
      },
      {
        "trust": 0.5,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30837"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30857"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30811"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30854"
      },
      {
        "trust": 0.3,
        "url": "http://www.openwall.com/lists/oss-security/2013/02/22/3"
      },
      {
        "trust": 0.3,
        "url": "http://www.libexpat.org/"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30855"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30808"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30834"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30831"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30814"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30840"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30815"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/ht212814."
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30838"
      },
      {
        "trust": 0.2,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30825"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30826"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30819"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30852"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30866"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/ht212819."
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/611.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/vulsio/gost"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/knqyf263/gost"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1001864"
      },
      {
        "trust": 0.1,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-http-server-is-vulnerable-to-denial-of-service-due-to-libexpat-cve-2022-43680-cve-2013-0340-cve-2017-9233/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30863"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2012-6702"
      },
      {
        "trust": 0.1,
        "url": "http://nvd.nist.gov/nvd.cfm?cvename=cve-2013-0340"
      },
      {
        "trust": 0.1,
        "url": "http://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-5300"
      },
      {
        "trust": 0.1,
        "url": "http://nvd.nist.gov/nvd.cfm?cvename=cve-2012-6702"
      },
      {
        "trust": 0.1,
        "url": "http://nvd.nist.gov/nvd.cfm?cvename=cve-2016-5300"
      },
      {
        "trust": 0.1,
        "url": "http://nvd.nist.gov/nvd.cfm?cvename=cve-2015-1283"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-0718"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4472"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2015-1283"
      },
      {
        "trust": 0.1,
        "url": "http://nvd.nist.gov/nvd.cfm?cvename=cve-2016-0718"
      },
      {
        "trust": 0.1,
        "url": "http://nvd.nist.gov/nvd.cfm?cvename=cve-2016-4472"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212815."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30850"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30816"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "db": "BID",
        "id": "58233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "140431"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "db": "BID",
        "id": "58233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "140431"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2014-01-21T00:00:00",
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "date": "2014-01-21T00:00:00",
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "date": "2013-02-21T00:00:00",
        "db": "BID",
        "id": "58233"
      },
      {
        "date": "2021-09-22T16:22:10",
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "date": "2017-01-11T18:55:11",
        "db": "PACKETSTORM",
        "id": "140431"
      },
      {
        "date": "2021-10-28T14:58:57",
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "date": "2021-10-28T14:58:43",
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "date": "2021-10-28T14:55:28",
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "date": "2021-09-22T16:22:32",
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "date": "2013-02-21T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      },
      {
        "date": "2014-01-23T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "date": "2014-01-21T18:55:09.117000",
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-02-13T00:00:00",
        "db": "VULHUB",
        "id": "VHN-60342"
      },
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2013-0340"
      },
      {
        "date": "2013-02-21T00:00:00",
        "db": "BID",
        "id": "58233"
      },
      {
        "date": "2023-04-19T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      },
      {
        "date": "2014-01-23T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      },
      {
        "date": "2025-11-25T17:15:47.723000",
        "db": "NVD",
        "id": "CVE-2013-0340"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Expat denial-of-service (DoS) vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2013-005874"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201303-096"
      }
    ],
    "trust": 0.6
  }
}

VAR-202102-1488

Vulnerability from variot - Updated: 2025-12-22 22:38

The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However, it fails to correctly handle any errors that may occur while parsing the issuer field (which might happen if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer dereference and a crash, leading to a potential denial-of-service attack. The function X509_issuer_and_serial_hash() is never called directly by OpenSSL itself, so applications are vulnerable only if they use this function directly, and only if they use it on certificates that may have been obtained from untrusted sources.

OpenSSL versions 1.1.1i and below are affected by this issue; users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are also affected; however, OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y; other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (affected: 1.1.1-1.1.1i). Fixed in OpenSSL 1.0.2y (affected: 1.0.2-1.0.2x). Bugs fixed (https://bugzilla.redhat.com/):

1944888 - CVE-2021-21409 netty: Request smuggling via content-length header 2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value

  1. JIRA issues fixed (https://issues.jboss.org/):

LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable

  1. 8) - noarch

  2. Description:

EDK (Embedded Development Kit) is a project to enable UEFI support for Virtual Machines. This package contains a sample 64-bit UEFI firmware for QEMU and KVM.

The following packages have been upgraded to a later upstream version: edk2 (20210527gite1999b264f1f).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: openssl security update Advisory ID: RHSA-2021:3798-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:3798 Issue date: 2021-10-12 CVE Names: CVE-2021-23840 CVE-2021-23841 =====================================================================

  1. Summary:

An update for openssl is now available for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64

  3. Description:

OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.

Security Fix(es):

  • openssl: integer overflow in CipherUpdate (CVE-2021-23840)

  • openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
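The affected ranges for CVE-2021-23841 (1.1.1 through 1.1.1i, and 1.0.2 through 1.0.2x) can be checked against the OpenSSL build an application is actually linked with. A minimal sketch using Python's standard `ssl` module, which exposes the linked version as a 5-tuple; the helper name and the patch-letter arithmetic are illustrative, not part of any advisory:

```python
import ssl

def openssl_affected(version_info=ssl.OPENSSL_VERSION_INFO):
    """Return True if the linked OpenSSL falls in a range affected by
    CVE-2021-23841 (1.1.1 through 1.1.1i, or 1.0.2 through 1.0.2x).

    version_info is OpenSSL's 5-tuple (major, minor, fix, patch, status);
    the patch letter is encoded numerically (a=1, ..., i=9, j=10, ...).
    """
    major, minor, fix, patch, _status = version_info
    if (major, minor, fix) == (1, 1, 1):
        return patch < 10   # fixed in 1.1.1j (patch letter j == 10)
    if (major, minor, fix) == (1, 0, 2):
        return patch < 25   # fixed in 1.0.2y (patch letter y == 25)
    return False            # other branches are outside this advisory

# Example: 1.1.1i is affected, 1.1.1j is not.
print(openssl_affected((1, 1, 1, 9, 15)),   # True
      openssl_affected((1, 1, 1, 10, 15)))  # False
```

Note that a bare 1.1.1 release (patch 0) is also reported as affected, matching the "Affected 1.1.1-1.1.1i" range above.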

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.
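The restart requirement exists because a running process keeps the libssl/libcrypto mappings it started with even after the files on disk are replaced. On Linux, such processes can be spotted because `/proc/<pid>/maps` marks the old mapping as `(deleted)`. A hypothetical helper sketching this check (the function name and the `proc_root` parameter are illustrative; Linux-only assumption):

```python
import glob
import os
import re

# Matches a maps line whose backing file is a deleted libssl/libcrypto,
# e.g. ".../libssl.so.10 (deleted)" after the package was updated.
_STALE = re.compile(r"lib(ssl|crypto)[^\n]*\.so[^\n]*\(deleted\)")

def stale_openssl_pids(proc_root="/proc"):
    """Return sorted PIDs of processes still mapping a deleted OpenSSL
    library, i.e. services that must be restarted to pick up the update."""
    pids = []
    for maps_path in glob.glob(os.path.join(proc_root, "[0-9]*", "maps")):
        try:
            with open(maps_path) as fh:
                text = fh.read()
        except OSError:
            continue  # process exited, or we lack permission to read it
        if _STALE.search(text):
            pids.append(int(os.path.basename(os.path.dirname(maps_path))))
    return sorted(pids)
```

In practice this is the same information that tools such as `needs-restarting` report; rebooting the system sidesteps the question entirely.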

  5. Bugs fixed (https://bugzilla.redhat.com/):

1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate

  6. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

ppc64: openssl-1.0.2k-22.el7_9.ppc64.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-devel-1.0.2k-22.el7_9.ppc.rpm openssl-devel-1.0.2k-22.el7_9.ppc64.rpm openssl-libs-1.0.2k-22.el7_9.ppc.rpm openssl-libs-1.0.2k-22.el7_9.ppc64.rpm

ppc64le: openssl-1.0.2k-22.el7_9.ppc64le.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-devel-1.0.2k-22.el7_9.ppc64le.rpm openssl-libs-1.0.2k-22.el7_9.ppc64le.rpm

s390x: openssl-1.0.2k-22.el7_9.s390x.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-devel-1.0.2k-22.el7_9.s390.rpm openssl-devel-1.0.2k-22.el7_9.s390x.rpm openssl-libs-1.0.2k-22.el7_9.s390.rpm openssl-libs-1.0.2k-22.el7_9.s390x.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64: openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-perl-1.0.2k-22.el7_9.ppc64.rpm openssl-static-1.0.2k-22.el7_9.ppc.rpm openssl-static-1.0.2k-22.el7_9.ppc64.rpm

ppc64le: openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-perl-1.0.2k-22.el7_9.ppc64le.rpm openssl-static-1.0.2k-22.el7_9.ppc64le.rpm

s390x: openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-perl-1.0.2k-22.el7_9.s390x.rpm openssl-static-1.0.2k-22.el7_9.s390.rpm openssl-static-1.0.2k-22.el7_9.s390x.rpm

x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  7. References:

https://access.redhat.com/security/cve/CVE-2021-23840 https://access.redhat.com/security/cve/CVE-2021-23841 https://access.redhat.com/security/updates/classification/#moderate

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYWWqjtzjgjWX9erEAQj4lg/+IFxqmMQqLSvyz8cKUAPgss/+/wFMpRgh ZZxYBQQ0cBFfWFlROVLaRdeiGcZYkyJCRDqy2Yb8YO1A4PnSOc+htLFYmSmU2kcm QLHinOzGEZo/44vN7Qsl4WhJkJIdlysCwKpkkOCUprMEnhlWMvja2eSSG9JLH16d RqGe4AsJQLKSKLgmhejCOqxb9am+t9zBW0zaZHP4UR52Ju1rG5rLjBJ85Gcrmp2B vp/GVEQ/Asid4MZA2WTx+s6wj5Dt7JOdLWrUbcYAC0I8oPWbAoZJTfPkM7S6Xv+U 68iruVFTh74IkCbQ+SNLoYjiDAVJqtAVRVBha7Fd3/gWR6aJLLaqluLRGvd0mwXY pohCS0ynuMQ9wtYOJ3ezSVcBN+/d9Hs/3s8RWQTzrNG6jtBe57H9/tNkeSVFSVvu PMKXsUoOrIUE2HCflJytDB9wkQmsWxiZoH/xVlrtD0D11egZ4EWjJL6x+xtCTAkT u67CAwsCKxxCeNmz42uBtXSwFXoUapJnsviGzAx247T2pyuXlYMYHlsOy7CtBvIk jEEosCMM72UyXO4XsYTXc0jM3ze6iQTcF9irwhy+X+rTB4IXBubdUEoT0jnKlwfI BQvoPEBlcG+f0VU8BL+FCOosvM0ZqC7KGGOwJLoG1Vqz8rbtmhpcmNAOvzUiHdm3 T4OjSl1NzQQ= =Taj2 -----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:

Red Hat Advanced Cluster Management for Kubernetes 2.2.10 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments.

Clusters and applications are all visible and managed from a single console — with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/

Security fixes:

  • CVE-2021-3795 semver-regex: inefficient regular expression complexity

  • CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
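The semver-regex issue (CVE-2021-3795) is a regular-expression denial of service (ReDoS): a crafted version string drives a backtracking regex engine into superlinear work. A minimal Python illustration of the vulnerability class, using a deliberately bad pattern rather than the actual expression shipped in semver-regex:

```python
import re
import time

# Nested quantifiers like (a+)+ cause catastrophic backtracking: on a
# near-miss input the engine tries roughly 2^n ways to split the "a"s.
# (Illustrative pattern only; not the semver-regex expression itself.)
EVIL = re.compile(r"^(a+)+$")

def timed_match(s):
    """Match s against EVIL and report (matched, elapsed_seconds)."""
    start = time.perf_counter()
    matched = EVIL.match(s) is not None
    return matched, time.perf_counter() - start

ok, fast = timed_match("a" * 20)         # matches almost instantly
bad, slow = timed_match("a" * 20 + "!")  # fails only after heavy backtracking
```

Fixes for this class of bug typically rewrite the pattern to remove the ambiguity or bound the input length before matching.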

Related bugs:

  • RHACM 2.2.10 images (Bugzilla #2013652)

Bugs fixed (https://bugzilla.redhat.com/):

2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747 2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity 2013652 - RHACM 2.2.10 images

========================================================================== Ubuntu Security Notice USN-4745-1 February 23, 2021

openssl vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 14.04 ESM
  • Ubuntu 12.04 ESM

Summary:

Several security issues were fixed in OpenSSL. (CVE-2021-23841)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 14.04 ESM: libssl1.0.0 1.0.1f-1ubuntu2.27+esm2

Ubuntu 12.04 ESM: libssl1.0.0 1.0.1-4ubuntu5.45

After a standard system update you need to reboot your computer to make all the necessary changes.

Bugs fixed (https://bugzilla.redhat.com/):

1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment

JIRA issues fixed (https://issues.jboss.org/):

LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding

Bugs fixed (https://bugzilla.redhat.com/):

1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option

Description:

This release adds the new Apache HTTP Server 2.4.37 Service Pack 10 packages that are part of the JBoss Core Services offering.

This release serves as a replacement for Red Hat JBoss Core Services Apache HTTP Server 2.4.37 Service Pack 9 and includes bug fixes and enhancements. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

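This record names 1.0.2y and 1.1.1j as the first fixed upstream OpenSSL releases for CVE-2021-23841. As a minimal sketch of the triage step implied by the advisories above, the following script classifies a plain upstream version string against those thresholds; it assumes GNU `sort -V` is available and does not handle distro-patched version strings such as `1.0.2k-22.el7_9`, where backported fixes make the upstream number misleading.

```shell
#!/bin/sh
# Report whether an upstream OpenSSL version string is at or past the
# first fixed release for CVE-2021-23841 (1.0.2y on the 1.0.2 branch,
# 1.1.1j on the 1.1.1 branch). Version comparison is done with sort -V.
is_fixed() {
  ver="$1"
  case "$ver" in
    1.0.2*) fixed="1.0.2y" ;;
    1.1.1*) fixed="1.1.1j" ;;
    *)      echo "unknown branch: $ver" >&2; return 2 ;;
  esac
  # The smaller of {ver, fixed} sorts first; if that is "fixed",
  # then ver >= fixed and the installed build is at least the fix.
  lowest=$(printf '%s\n%s\n' "$ver" "$fixed" | sort -V | head -n1)
  [ "$lowest" = "$fixed" ]
}

for v in 1.0.2x 1.0.2y 1.1.1i 1.1.1j 1.1.1k; do
  if is_fixed "$v"; then echo "$v: fixed"; else echo "$v: vulnerable"; fi
done
# prints:
#   1.0.2x: vulnerable
#   1.0.2y: fixed
#   1.1.1i: vulnerable
#   1.1.1j: fixed
#   1.1.1k: fixed
```

On RHEL or Ubuntu systems covered by the advisories above, rely on the distribution's package version (e.g. `rpm -q openssl` or `dpkg -l libssl1.0.0`) rather than the upstream number, since the fix is backported there.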

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1488",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.9.0.0.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1j"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.3.1.2"
      },
      {
        "model": "mysql server",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.23"
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.12.1"
      },
      {
        "model": "essbase",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.2"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "mysql enterprise monitor",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.23"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.0.0.2"
      },
      {
        "model": "jd edwards world security",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "a9.4"
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.11.0"
      },
      {
        "model": "tenable.sc",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.13.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.57"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "snapcenter",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.13.0"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.5.0.0.0"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.1.1"
      },
      {
        "model": "oncommand insight",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.11.1"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.6"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      },
      {
        "model": "oncommand workflow automation",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2"
      },
      {
        "model": "mysql server",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.15"
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.12.0"
      },
      {
        "model": "zfs storage appliance kit",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.8"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.2.1.4.0"
      },
      {
        "model": "mysql server",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.7.33"
      },
      {
        "model": "enterprise manager for storage management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "13.4.0.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.2.1.3.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2y"
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1"
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.15.0"
      },
      {
        "model": "enterprise manager ops center",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.4.0.0"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.3.5"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.4"
      },
      {
        "model": "tenable.sc",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.17.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.6"
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.1"
      },
      {
        "model": "hitachi device manager",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "rv3000",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "hitachi tuning manager",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "hitachi ops center common services",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "tenable.sc",
        "scope": null,
        "trust": 0.8,
        "vendor": "tenable",
        "version": null
      },
      {
        "model": "openssl",
        "scope": null,
        "trust": 0.8,
        "vendor": "openssl",
        "version": null
      },
      {
        "model": "hitachi ops center analyzer viewpoint",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "164890"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "165209"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      }
    ],
    "trust": 1.3
  },
  "cve": "CVE-2021-23841",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-23841",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.8,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "VHN-382524",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 5.9,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.2,
            "id": "CVE-2021-23841",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 5.9,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2021-23841",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-23841",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-23841",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202102-1200",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-382524",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However it fails to correctly handle any errors that may occur while parsing the issuer field (which might occur if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer deref and a crash leading to a potential denial of service attack. The function X509_issuer_and_serial_hash() is never directly called by OpenSSL itself so applications are only vulnerable if they use this function directly and they use it on certificates that may have been obtained from untrusted sources. OpenSSL versions 1.1.1i and below are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are affected by this issue. However OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). There is no information about this vulnerability at present. Please keep an eye on CNNVD or manufacturer announcements. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. 8) - noarch\n\n3. 
Description:\n\nEDK (Embedded Development Kit) is a project to enable UEFI support for\nVirtual Machines. This package contains a sample 64-bit UEFI firmware for\nQEMU and KVM. \n\nThe following packages have been upgraded to a later upstream version: edk2\n(20210527gite1999b264f1f). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: openssl security update\nAdvisory ID:       RHSA-2021:3798-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:3798\nIssue date:        2021-10-12\nCVE Names:         CVE-2021-23840 CVE-2021-23841 \n=====================================================================\n\n1. Summary:\n\nAn update for openssl is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. 
Description:\n\nOpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and\nTransport Layer Security (TLS) protocols, as well as a full-strength\ngeneral-purpose cryptography library. \n\nSecurity Fix(es):\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the OpenSSL library\nmust be restarted, or the system rebooted. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nppc64:\nopenssl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390x.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-perl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-static-1.0.2k-22.el7_9.s390.rpm\nopenssl-static-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYWWqjtzjgjWX9erEAQj4lg/+IFxqmMQqLSvyz8cKUAPgss/+/wFMpRgh\nZZxYBQQ0cBFfWFlROVLaRdeiGcZYkyJCRDqy2Yb8YO1A4PnSOc+htLFYmSmU2kcm\nQLHinOzGEZo/44vN7Qsl4WhJkJIdlysCwKpkkOCUprMEnhlWMvja2eSSG9JLH16d\nRqGe4AsJQLKSKLgmhejCOqxb9am+t9zBW0zaZHP4UR52Ju1rG5rLjBJ85Gcrmp2B\nvp/GVEQ/Asid4MZA2WTx+s6wj5Dt7JOdLWrUbcYAC0I8oPWbAoZJTfPkM7S6Xv+U\n68iruVFTh74IkCbQ+SNLoYjiDAVJqtAVRVBha7Fd3/gWR6aJLLaqluLRGvd0mwXY\npohCS0ynuMQ9wtYOJ3ezSVcBN+/d9Hs/3s8RWQTzrNG6jtBe57H9/tNkeSVFSVvu\nPMKXsUoOrIUE2HCflJytDB9wkQmsWxiZoH/xVlrtD0D11egZ4EWjJL6x+xtCTAkT\nu67CAwsCKxxCeNmz42uBtXSwFXoUapJnsviGzAx247T2pyuXlYMYHlsOy7CtBvIk\njEEosCMM72UyXO4XsYTXc0jM3ze6iQTcF9irwhy+X+rTB4IXBubdUEoT0jnKlwfI\nBQvoPEBlcG+f0VU8BL+FCOosvM0ZqC7KGGOwJLoG1Vqz8rbtmhpcmNAOvzUiHdm3\nT4OjSl1NzQQ=\n=Taj2\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.2.10 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. \n\nClusters and applications are all visible and managed from a single console\n\u2014 with security policy built in. 
See the following Release Notes documentation, which\nwill be updated shortly for this release, for additional details about this\nrelease:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2021-3795 semver-regex: inefficient regular expression complexity\n\n* CVE-2021-23440 nodejs-set-value: type confusion allows bypass of\nCVE-2019-10747\n\nRelated bugs: \n\n* RHACM 2.2.10 images (Bugzilla #2013652)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2006009 - CVE-2021-3795 semver-regex: inefficient regular expression complexity\n2013652 - RHACM 2.2.10 images\n\n5. ==========================================================================\nUbuntu Security Notice USN-4745-1\nFebruary 23, 2021\n\nopenssl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 14.04 ESM\n- Ubuntu 12.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in OpenSSL. (CVE-2021-23841)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 14.04 ESM:\n  libssl1.0.0                     1.0.1f-1ubuntu2.27+esm2\n\nUbuntu 12.04 ESM:\n  libssl1.0.0                     1.0.1-4ubuntu5.45\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should  be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 10\npackages that are part of the JBoss Core Services offering. \n\nThis release serves as a replacement for Red Hat JBoss Core Services Apache\nHTTP Server 2.4.37 Service Pack 9 and includes bug fixes and enhancements. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "164890"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "165209"
      },
      {
        "db": "PACKETSTORM",
        "id": "161525"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      }
    ],
    "trust": 2.43
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-382524",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-23841",
        "trust": 4.1
      },
      {
        "db": "TENABLE",
        "id": "TNS-2021-03",
        "trust": 1.7
      },
      {
        "db": "TENABLE",
        "id": "TNS-2021-09",
        "trust": 1.7
      },
      {
        "db": "PULSESECURE",
        "id": "SA44846",
        "trust": 1.7
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-21-336-06",
        "trust": 1.4
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.4
      },
      {
        "db": "PACKETSTORM",
        "id": "161525",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164927",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "165002",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164890",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94508446",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90348129",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162151",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165096",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164583",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165099",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "162823",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "161459",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165129",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "162041",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164489",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0974",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0786",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3792",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0636",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3375",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4095",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0916",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4172",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4104",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3485",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1618",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4059",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3499",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4019",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0670",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3846",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0958",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0897",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1015",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1225",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0696",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3905",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3935",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4254",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0859",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1794",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0832",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1502",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2657",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4229",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0992",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164562",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "161450",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021041501",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022022131",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021120313",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021102116",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071618",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071832",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021051226",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021052505",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021101933",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022032007",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021052508",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021042109",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021111137",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021101330",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021111733",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164928",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162824",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162826",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-382524",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165286",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165209",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164967",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "164890"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "165209"
      },
      {
        "db": "PACKETSTORM",
        "id": "161525"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "id": "VAR-202102-1488",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      }
    ],
    "trust": 0.30766129
  },
  "last_update_date": "2025-12-22T22:38:54.865000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "hitachi-sec-2023-126",
        "trust": 0.8,
        "url": "https://www.debian.org/security/2021/dsa-4855"
      },
      {
        "title": "OpenSSL Enter the fix for the verification error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=142812"
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-476",
        "trust": 1.1
      },
      {
        "problemtype": "Integer overflow or wraparound (CWE-190) [NVD evaluation ]",
        "trust": 0.8
      },
      {
        "problemtype": "CWE-190",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.3,
        "url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
      },
      {
        "trust": 2.3,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 1.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.7,
        "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44846"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20210219-0009/"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20210513-0002/"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht212528"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht212529"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht212534"
      },
      {
        "trust": 1.7,
        "url": "https://www.openssl.org/news/secadv/20210216.txt"
      },
      {
        "trust": 1.7,
        "url": "https://www.tenable.com/security/tns-2021-03"
      },
      {
        "trust": 1.7,
        "url": "https://www.tenable.com/security/tns-2021-09"
      },
      {
        "trust": 1.7,
        "url": "https://www.debian.org/security/2021/dsa-4855"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2021/may/67"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2021/may/70"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2021/may/68"
      },
      {
        "trust": 1.7,
        "url": "https://security.gentoo.org/glsa/202103-03"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
      },
      {
        "trust": 1.4,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-21-336-06"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=122a19ab48091c657f7cb1fb3af9fc07bd557bbf"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20240621-0006/"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu94508446/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90348129/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.7,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=122a19ab48091c657f7cb1fb3af9fc07bd557bbf"
      },
      {
        "trust": 0.7,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0916"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0958"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022022131"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0832"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-openssl-may-affect-ibm-workload-scheduler-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2657"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0636"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-ibm-spectrum-protect-backup-archive-client-netapp-services-cve-2020-1971-cve-2021-23840-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3792"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilites-affect-engineering-lifecycle-management-and-ibm-engineering-products/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1015"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-publicly-disclosed-vulnerabilities-affect-messagegateway-cve-2021-23841-cve-2021-23840/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164890/red-hat-security-advisory-2021-4198-03.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162041/gentoo-linux-security-advisory-202103-03.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071618"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-z-tpf-is-affected-by-openssl-vulnerabilities/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021120313"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161525/ubuntu-security-notice-usn-4745-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1618"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162823/apple-security-advisory-2021-05-25-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht212529"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-aix-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6486335"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/openssl-null-pointer-dereference-via-x509-issuer-and-serial-hash-34598"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-vulnerabilites-impacting-aspera-high-speed-transfer-server-aspera-high-speed-transfer-endpoint-aspera-desktop-client-4-0-and-earlier-cve-2021-23839-cve-2021-23840-cve/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4059"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3485"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021042109"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164562/red-hat-security-advisory-2021-3925-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4095"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4172"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-aix-cve-2021-23839-cve-2021-23840-and-cve-2021-23841-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-for-ibm-i-is-affected-by-cve-2021-23840-and-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-security-vulnerabilities-fixed-in-openssl-as-shipped-with-ibm-security-verify-products/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-vulnerability-was-identified-and-remediated-in-the-ibm-maas360-cloud-extender-v2-103-000-051-and-modules/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021111137"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0859"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-openssl-affect-ibm-tivoli-netcool-system-service-monitors-application-service-monitors/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164927/red-hat-security-advisory-2021-4614-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-openssl-vulnerabilities-affect-ibm-connectdirect-for-hp-nonstop/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021051226"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0897"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0974"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6487493"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3846"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1502"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht212534"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1225"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-websphere-mq-for-hp-nonstop-server-is-affected-by-multiple-openssl-vulnerabilities-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4019"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161459/ubuntu-security-notice-usn-4738-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-mq-for-hp-nonstop-server-is-affected-by-openssl-vulnerabilities-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021111733"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021041501"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-sterling-connectexpress-for-unix-is-affected-by-multiple-vulnerabilities-in-openssl-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3375"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4104"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021101933"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6479349"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1794"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3499"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022032007"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021052508"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021101330"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021052505"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165129/red-hat-security-advisory-2021-4902-06.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071832"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0696"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-affect-ibm-sdk-for-node-js-in-ibm-cloud-5/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164583/red-hat-security-advisory-2021-3949-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162151/red-hat-security-advisory-2021-1168-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerability-in-openssl-affects-ibm-rational-clearcase-cve-2020-1971-cve-2021-23839-cve-2021-23840-cve-2021-23841-cve-2021-23839-cve-2021-23840-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165096/red-hat-security-advisory-2021-4845-05.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3935"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0786"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6507581"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4229"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165002/red-hat-security-advisory-2021-4032-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165099/red-hat-security-advisory-2021-4848-07.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161450/openssl-toolkit-1.1.1j.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0670"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0992"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-ibm-spectrum-protect-backup-archive-client-netapp-services-cve-2020-1971-cve-2021-23840-cve-2021-23841-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6490371"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164489/red-hat-security-advisory-2021-3798-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-vulnerabilites-impacting-aspera-high-speed-transfer-server-aspera-high-speed-transfer-endpoint-aspera-desktop-client-4-0-and-earlier-cve-2021-23839-cve-2021-23840-cve-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021102116"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-27645"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-33574"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-35942"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3572"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3778"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2018-20673"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-20266"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3426"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3796"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14145"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.3,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25013"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35522"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35524"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25014"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25012"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35521"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36331"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36330"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36332"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3481"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25009"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25010"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35523"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#low"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:5128"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37137"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21409"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4198"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3798"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36385"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:5038"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43267"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3795"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20317"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23440"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4745-1"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1971"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23133"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3573"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26141"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27777"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26147"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26144"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20197"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3487"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0427"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36312"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31829"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31440"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26145"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3564"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10001"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35448"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3489"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28971"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26146"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26139"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3679"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24588"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36158"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24504"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3348"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24503"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29646"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-0129"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26143"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33200"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29660"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26140"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3600"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20239"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28950"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4032"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26691"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13950"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17567"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35452"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30641"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30641"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17567"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13950"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "164890"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "165209"
      },
      {
        "db": "PACKETSTORM",
        "id": "161525"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "164890"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "165209"
      },
      {
        "db": "PACKETSTORM",
        "id": "161525"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-02-16T00:00:00",
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "date": "2021-12-15T15:20:33",
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "date": "2021-11-10T17:13:18",
        "db": "PACKETSTORM",
        "id": "164890"
      },
      {
        "date": "2021-10-13T14:47:32",
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "date": "2021-12-09T14:50:37",
        "db": "PACKETSTORM",
        "id": "165209"
      },
      {
        "date": "2021-02-24T14:50:51",
        "db": "PACKETSTORM",
        "id": "161525"
      },
      {
        "date": "2021-11-15T17:25:56",
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "date": "2021-11-17T15:25:40",
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "date": "2021-11-11T14:53:11",
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "date": "2021-02-16T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      },
      {
        "date": "2021-05-14T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "date": "2021-02-16T17:15:13.377000",
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "date": "2022-09-19T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      },
      {
        "date": "2023-07-20T06:25:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "date": "2024-11-21T05:51:55.460000",
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "161525"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "OpenSSL\u00a0 In \u00a0NULL\u00a0 Pointer dereference vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1200"
      }
    ],
    "trust": 0.6
  }
}

VAR-202110-1622

Vulnerability from variot - Updated: 2025-12-22 22:37

A memory corruption issue was addressed with improved memory handling. This issue is fixed in iOS 14.8 and iPadOS 14.8, Safari 15, tvOS 15, iOS 15 and iPadOS 15, watchOS 8. Processing maliciously crafted web content may lead to arbitrary code execution. Multiple Apple products contain an out-of-bounds write vulnerability. Information may be obtained, information may be tampered with, and service operation may be interrupted (DoS). iOS 15 and iPadOS 15. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update
Advisory ID:       RHSA-2022:1777-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1777
Issue date:        2022-05-10
CVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823
                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848
                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884
                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889
                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934
                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952
                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984
                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483
                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592
                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637
=====================================================================

  1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64

  3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

The following packages have been upgraded to a later upstream version: webkit2gtk3 (2.34.6).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  5. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: webkit2gtk3-2.34.6-1.el8.src.rpm

aarch64: webkit2gtk3-2.34.6-1.el8.aarch64.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm webkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm

ppc64le: webkit2gtk3-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm

s390x: webkit2gtk3-2.34.6-1.el8.s390x.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm webkit2gtk3-devel-2.34.6-1.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm

x86_64: webkit2gtk3-2.34.6-1.el8.i686.rpm webkit2gtk3-2.34.6-1.el8.x86_64.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm webkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm webkit2gtk3-devel-2.34.6-1.el8.i686.rpm webkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  6. References:

https://access.redhat.com/security/cve/CVE-2021-30809
https://access.redhat.com/security/cve/CVE-2021-30818
https://access.redhat.com/security/cve/CVE-2021-30823
https://access.redhat.com/security/cve/CVE-2021-30836
https://access.redhat.com/security/cve/CVE-2021-30846
https://access.redhat.com/security/cve/CVE-2021-30848
https://access.redhat.com/security/cve/CVE-2021-30849
https://access.redhat.com/security/cve/CVE-2021-30851
https://access.redhat.com/security/cve/CVE-2021-30884
https://access.redhat.com/security/cve/CVE-2021-30887
https://access.redhat.com/security/cve/CVE-2021-30888
https://access.redhat.com/security/cve/CVE-2021-30889
https://access.redhat.com/security/cve/CVE-2021-30890
https://access.redhat.com/security/cve/CVE-2021-30897
https://access.redhat.com/security/cve/CVE-2021-30934
https://access.redhat.com/security/cve/CVE-2021-30936
https://access.redhat.com/security/cve/CVE-2021-30951
https://access.redhat.com/security/cve/CVE-2021-30952
https://access.redhat.com/security/cve/CVE-2021-30953
https://access.redhat.com/security/cve/CVE-2021-30954
https://access.redhat.com/security/cve/CVE-2021-30984
https://access.redhat.com/security/cve/CVE-2021-45481
https://access.redhat.com/security/cve/CVE-2021-45482
https://access.redhat.com/security/cve/CVE-2021-45483
https://access.redhat.com/security/cve/CVE-2022-22589
https://access.redhat.com/security/cve/CVE-2022-22590
https://access.redhat.com/security/cve/CVE-2022-22592
https://access.redhat.com/security/cve/CVE-2022-22594
https://access.redhat.com/security/cve/CVE-2022-22620
https://access.redhat.com/security/cve/CVE-2022-22637
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/

Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

Alternatively, on your watch, select "My Watch > General > About".

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2021-10-26-9 Additional information for APPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15

iOS 15 and iPadOS 15 address the following issues. Information about the security content is also available at https://support.apple.com/HT212814.

Accessory Manager Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory consumption issue was addressed with improved memory handling. CVE-2021-30837: Siddharth Aeri (@b1n4r1b01)

AppleMobileFileIntegrity Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to read sensitive information Description: This issue was addressed with improved checks. CVE-2021-30811: an anonymous researcher working with Compartir

Apple Neural Engine Available for devices with Apple Neural Engine: iPhone 8 and later, iPad Pro (3rd generation) and later, iPad Air (3rd generation) and later, and iPad mini (5th generation) Impact: A malicious application may be able to execute arbitrary code with system privileges on devices with an Apple Neural Engine Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30838: proteas wang

bootp Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A device may be passively tracked by its WiFi MAC address Description: A user privacy issue was addressed by removing the broadcast MAC address. CVE-2021-30866: Fabien Duchêne of UCLouvain (Belgium) Entry added October 25, 2021

CoreAudio Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a malicious audio file may result in unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30834: JunDong Xie of Ant Security Light-Year Lab Entry added October 25, 2021

CoreML Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30825: hjy79425575 working with Trend Micro Zero Day Initiative

Face ID Available for devices with Face ID: iPhone X, iPhone XR, iPhone XS (all models), iPhone 11 (all models), iPhone 12 (all models), iPad Pro (11-inch), and iPad Pro (3rd generation) Impact: A 3D model constructed to look like the enrolled user may be able to authenticate via Face ID Description: This issue was addressed by improving Face ID anti-spoofing models. CVE-2021-30863: Wish Wu (吴潍浠 @wish_wu) of Ant-financial Light-Year Security Lab

FaceTime Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker with physical access to a device may be able to see private contact information Description: The issue was addressed with improved permissions logic. CVE-2021-30816: Atharv (@atharv0x0) Entry added October 25, 2021

FaceTime Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application with microphone permission may unexpectedly access microphone input during a FaceTime call Description: A logic issue was addressed with improved validation. CVE-2021-30882: Adam Bellard and Spencer Reitman of Airtime Entry added October 25, 2021

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted font may result in the disclosure of process memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30831: Xingwei Lin of Ant Security Light-Year Lab Entry added October 25, 2021

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted dfont file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30840: Xingwei Lin of Ant Security Light-Year Lab Entry added October 25, 2021

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted dfont file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab

Foundation Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved memory handling. CVE-2021-30852: Yinyi Wu (@3ndy1) of Ant Security Light-Year Lab Entry added October 25, 2021

iCloud Photo Library Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to access photo metadata without needing permission to access photos Description: The issue was addressed with improved authentication. CVE-2021-30867: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 25, 2021

ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved input validation. CVE-2021-30814: hjy79425575 Entry added October 25, 2021

ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30835: Ye Zhang of Baidu Security CVE-2021-30847: Mike Zhang of Pangu Lab

Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2021-30857: Zweig of Kunlun Lab

libexpat Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed by updating expat to version 2.4.1. CVE-2013-0340: an anonymous researcher

Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30819: Apple

NetworkExtension Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A VPN configuration may be installed by an app without user permission Description: An authorization issue was addressed with improved state management. CVE-2021-30874: Javier Vieira Boccardo (linkedin.com/javier-vieira-boccardo) Entry added October 25, 2021

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to access restricted files Description: A validation issue existed in the handling of symlinks. CVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved state management. CVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)

Quick Look Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Previewing an html file attached to a note may unexpectedly contact remote servers Description: A logic issue existed in the handling of document loads. CVE-2021-30870: Saif Hamed Al Hinai of Oman CERT Entry added October 25, 2021

Sandbox Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to modify protected parts of the file system Description: This issue was addressed with improved checks. CVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 25, 2021

Siri Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to view contacts from the lock screen Description: A lock screen issue allowed access to contacts on a locked device. CVE-2021-30815: an anonymous researcher

Telephony Available for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad Air 2, iPad (5th generation), and iPad mini 4 Impact: In certain situations, the baseband would fail to enable integrity and ciphering protection Description: A logic issue was addressed with improved state management. CVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST SysSec Lab

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Visiting a maliciously crafted website may reveal a user's browsing history Description: The issue was resolved with additional restrictions on CSS compositing. CVE-2021-30884: an anonymous researcher Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A type confusion issue was addressed with improved state handling. CVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted audio file may disclose restricted memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30836: Peter Nguyen Vu Hoang of STAR Labs Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-30809: an anonymous researcher Entry added October 25, 2021

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30846: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30848: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30849: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption vulnerability was addressed with improved locking. CVE-2021-30851: Samuel Groß of Google Project Zero

Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker in physical proximity may be able to force a user onto a malicious Wi-Fi network during device setup Description: An authorization issue was addressed with improved state management. CVE-2021-30810: an anonymous researcher

Additional recognition

Assets We would like to acknowledge Cees Elzinga for their assistance.

Bluetooth We would like to acknowledge an anonymous researcher for their assistance.

File System We would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their assistance.

Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.

UIKit We would like to acknowledge an anonymous researcher for their assistance.

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About * The version after applying this update will be "15"

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmF4hy0ACgkQeC9qKD1p rhiHNRAAwUaVHgd+whk6qGBZ3PYqSbvvuuo00rLW6JIqv9dwpEh9BBD//bSsUppb 41J5VaNoKDsonTLhXt0Mhn66wmhbGjLneMIoNb7ffl7O2xDQaWAr+HmoUm6wOo48 Kqj/wJGNJJov4ucBA6InpUz1ZevEhaPU4QMNedVck4YSl1GhtSTJsBAzVkMakQhX uJ1fVdOJ5konmmQJLYxDUo60xqS0sZPchkwCM1zwR/SAZ70pt6P0MGI1Yddjcn1U loAcKYVgkKAc9RWkXRskR1RxFBGivTI/gy5pDkLxfGfwFecf6PSR7MDki4xDeoVH 5FWXBwga8Uc/afGRqnFwTpdsisRZP8rQFwMam1T/DwgrWD8R2CCn/wOcvbtlWMIv LczYCJFMELaXOjFF5duXaUJme97567OypYvhjBDtiIPg5MCGhZZCmpbRjkcUBZNJ YQOELzq6CHWc96mjPOt34B0X2VXGhvgpQ0/evvcQe3bHv0F7N/acAlgsGe+e4Jn8 k0gWZocq+fPnl6YYgZKIGgcZWUl5bdqduApesEtpRU2ug2TE+xMOhMZXb1WLawJl n/OtVHhIjft23r0MGgyWTIHMPe5DRvEPWGI3DS+55JX6XOxSGp9o6xgOAraZR4U6 HO/WbQOwj7SSKbyPxmDTp4OMyFPukbe92WIMh5EpFcILp6GTJqQ= =lg51 -----END PGP SIGNATURE-----

CVE-2021-30851: Samuel Groß of Google Project Zero

Installation note:

This update may be obtained from the Mac App Store. Apple is aware of a report that this issue may have been actively exploited. CVE-2021-30846: Sergei Glazunov of Google Project Zero Entry added September 20, 2021

Additional recognition

CoreML We would like to acknowledge hjy79425575 working with Trend Micro Zero Day Initiative for their assistance.

==========================================================================
Ubuntu Security Notice USN-5127-1
November 01, 2021

webkit2gtk vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 21.10
  • Ubuntu 21.04
  • Ubuntu 20.04 LTS

Summary:

Several security issues were fixed in WebKitGTK.

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 21.10: libjavascriptcoregtk-4.0-18 2.34.1-0ubuntu0.21.10.1 libwebkit2gtk-4.0-37 2.34.1-0ubuntu0.21.10.1

Ubuntu 21.04: libjavascriptcoregtk-4.0-18 2.34.1-0ubuntu0.21.04.1 libwebkit2gtk-4.0-37 2.34.1-0ubuntu0.21.04.1

Ubuntu 20.04 LTS: libjavascriptcoregtk-4.0-18 2.34.1-0ubuntu0.20.04.1 libwebkit2gtk-4.0-37 2.34.1-0ubuntu0.20.04.1
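Whether a system already has one of the fixed versions above can be checked by comparing version strings. A minimal sketch, assuming only POSIX shell and GNU `sort -V` (which understands Debian-style versions, though it is not identical to dpkg's comparison in every edge case); the `installed` value here is a placeholder, and on a real system it would come from `dpkg-query -W -f='${Version}' libwebkit2gtk-4.0-37`:

```shell
# Fixed version for Ubuntu 20.04 LTS, taken from the list above.
fixed="2.34.1-0ubuntu0.20.04.1"

# Placeholder value; on a real system, query it with dpkg-query as noted above.
installed="2.33.2-0ubuntu0.20.04.1"

# sort -V orders version strings; if the installed version sorts below the
# fixed one, the package still needs the update.
highest="$(printf '%s\n%s\n' "$fixed" "$installed" | sort -V | tail -n1)"
if [ "$highest" = "$fixed" ] && [ "$installed" != "$fixed" ]; then
  echo "update needed"
else
  echo "up to date"
fi
```

With the placeholder value above, this prints "update needed".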

This update uses a new upstream release, which includes additional bug fixes. After a standard system update you need to restart any applications that use WebKitGTK, such as Epiphany, to make all the necessary changes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 202202-01


                                       https://security.gentoo.org/

Severity: High
Title: WebkitGTK+: Multiple vulnerabilities
Date: February 01, 2022
Bugs: #779175, #801400, #813489, #819522, #820434, #829723, #831739
ID: 202202-01


Synopsis

Multiple vulnerabilities have been found in WebkitGTK+, the worst of which could result in the arbitrary execution of code.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

1 net-libs/webkit-gtk < 2.34.4 >= 2.34.4

Description

Multiple vulnerabilities have been discovered in WebkitGTK+. Please review the CVE identifiers referenced below for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.34.4"

References

[ 1 ] CVE-2021-30848   https://nvd.nist.gov/vuln/detail/CVE-2021-30848
[ 2 ] CVE-2021-30888   https://nvd.nist.gov/vuln/detail/CVE-2021-30888
[ 3 ] CVE-2021-30682   https://nvd.nist.gov/vuln/detail/CVE-2021-30682
[ 4 ] CVE-2021-30889   https://nvd.nist.gov/vuln/detail/CVE-2021-30889
[ 5 ] CVE-2021-30666   https://nvd.nist.gov/vuln/detail/CVE-2021-30666
[ 6 ] CVE-2021-30665   https://nvd.nist.gov/vuln/detail/CVE-2021-30665
[ 7 ] CVE-2021-30890   https://nvd.nist.gov/vuln/detail/CVE-2021-30890
[ 8 ] CVE-2021-30661   https://nvd.nist.gov/vuln/detail/CVE-2021-30661
[ 9 ] WSA-2021-0005    https://webkitgtk.org/security/WSA-2021-0005.html
[ 10 ] CVE-2021-30761  https://nvd.nist.gov/vuln/detail/CVE-2021-30761
[ 11 ] CVE-2021-30897  https://nvd.nist.gov/vuln/detail/CVE-2021-30897
[ 12 ] CVE-2021-30823  https://nvd.nist.gov/vuln/detail/CVE-2021-30823
[ 13 ] CVE-2021-30734  https://nvd.nist.gov/vuln/detail/CVE-2021-30734
[ 14 ] CVE-2021-30934  https://nvd.nist.gov/vuln/detail/CVE-2021-30934
[ 15 ] CVE-2021-1871   https://nvd.nist.gov/vuln/detail/CVE-2021-1871
[ 16 ] CVE-2021-30762  https://nvd.nist.gov/vuln/detail/CVE-2021-30762
[ 17 ] WSA-2021-0006   https://webkitgtk.org/security/WSA-2021-0006.html
[ 18 ] CVE-2021-30797  https://nvd.nist.gov/vuln/detail/CVE-2021-30797
[ 19 ] CVE-2021-30936  https://nvd.nist.gov/vuln/detail/CVE-2021-30936
[ 20 ] CVE-2021-30663  https://nvd.nist.gov/vuln/detail/CVE-2021-30663
[ 21 ] CVE-2021-1825   https://nvd.nist.gov/vuln/detail/CVE-2021-1825
[ 22 ] CVE-2021-30951  https://nvd.nist.gov/vuln/detail/CVE-2021-30951
[ 23 ] CVE-2021-30952  https://nvd.nist.gov/vuln/detail/CVE-2021-30952
[ 24 ] CVE-2021-1788   https://nvd.nist.gov/vuln/detail/CVE-2021-1788
[ 25 ] CVE-2021-1820   https://nvd.nist.gov/vuln/detail/CVE-2021-1820
[ 26 ] CVE-2021-30953  https://nvd.nist.gov/vuln/detail/CVE-2021-30953
[ 27 ] CVE-2021-30749  https://nvd.nist.gov/vuln/detail/CVE-2021-30749
[ 28 ] CVE-2021-30849  https://nvd.nist.gov/vuln/detail/CVE-2021-30849
[ 29 ] CVE-2021-1826   https://nvd.nist.gov/vuln/detail/CVE-2021-1826
[ 30 ] CVE-2021-30836  https://nvd.nist.gov/vuln/detail/CVE-2021-30836
[ 31 ] CVE-2021-30954  https://nvd.nist.gov/vuln/detail/CVE-2021-30954
[ 32 ] CVE-2021-30984  https://nvd.nist.gov/vuln/detail/CVE-2021-30984
[ 33 ] CVE-2021-30851  https://nvd.nist.gov/vuln/detail/CVE-2021-30851
[ 34 ] CVE-2021-30758  https://nvd.nist.gov/vuln/detail/CVE-2021-30758
[ 35 ] CVE-2021-42762  https://nvd.nist.gov/vuln/detail/CVE-2021-42762
[ 36 ] CVE-2021-1844   https://nvd.nist.gov/vuln/detail/CVE-2021-1844
[ 37 ] CVE-2021-30689  https://nvd.nist.gov/vuln/detail/CVE-2021-30689
[ 38 ] CVE-2021-45482  https://nvd.nist.gov/vuln/detail/CVE-2021-45482
[ 39 ] CVE-2021-30858  https://nvd.nist.gov/vuln/detail/CVE-2021-30858
[ 40 ] CVE-2021-21779  https://nvd.nist.gov/vuln/detail/CVE-2021-21779
[ 41 ] WSA-2021-0004   https://webkitgtk.org/security/WSA-2021-0004.html
[ 42 ] CVE-2021-30846  https://nvd.nist.gov/vuln/detail/CVE-2021-30846
[ 43 ] CVE-2021-30744  https://nvd.nist.gov/vuln/detail/CVE-2021-30744
[ 44 ] CVE-2021-30809  https://nvd.nist.gov/vuln/detail/CVE-2021-30809
[ 45 ] CVE-2021-30884  https://nvd.nist.gov/vuln/detail/CVE-2021-30884
[ 46 ] CVE-2021-30720  https://nvd.nist.gov/vuln/detail/CVE-2021-30720
[ 47 ] CVE-2021-30799  https://nvd.nist.gov/vuln/detail/CVE-2021-30799
[ 48 ] CVE-2021-30795  https://nvd.nist.gov/vuln/detail/CVE-2021-30795
[ 49 ] CVE-2021-1817   https://nvd.nist.gov/vuln/detail/CVE-2021-1817
[ 50 ] CVE-2021-21775  https://nvd.nist.gov/vuln/detail/CVE-2021-21775
[ 51 ] CVE-2021-30887  https://nvd.nist.gov/vuln/detail/CVE-2021-30887
[ 52 ] CVE-2021-21806  https://nvd.nist.gov/vuln/detail/CVE-2021-21806
[ 53 ] CVE-2021-30818  https://nvd.nist.gov/vuln/detail/CVE-2021-30818

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202202-01

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5
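The VARIoT record that follows carries both CVSS v2 and v3.1 vectors for CVE-2021-30846. As a cross-check, the v3.1 base score of 7.8 can be recomputed from the vector `CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H` using the metric weights from the CVSS v3.1 specification. This is an independent sketch (not code from any of the advisories above) and it handles only vectors with unchanged scope (`S:U`), which is the case here:

```python
import math

# Metric weights from the CVSS v3.1 specification.
# PR weights are those for unchanged scope; S:C would use different values.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x):
    # CVSS v3.1 "Roundup": smallest number, to one decimal place, >= x.
    # Integer arithmetic avoids floating-point artifacts, per the spec's appendix.
    i = int(round(x * 100000))
    if i % 10000 == 0:
        return i / 100000.0
    return (math.floor(i / 10000) + 1) / 10.0

def base_score(vector):
    # Parse "CVSS:3.1/AV:L/AC:L/..." into a metric -> value mapping.
    m = dict(part.split(":") for part in vector.split("/")[1:])
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss  # scope unchanged
    expl = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + expl, 10))

print(base_score("CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"))  # → 7.8
```

The local attack vector and required user interaction lower the exploitability term, but the High impact on all three of confidentiality, integrity, and availability keeps the score in the High severity band, matching the record below.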


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202110-1622",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.8"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0.1"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "8.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.8"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "33"
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "tvos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "ios",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "safari",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "watchos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164242"
      }
    ],
    "trust": 0.5
  },
  "cve": "CVE-2021-30846",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-30846",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.8,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-390579",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "id": "CVE-2021-30846",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Local",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2021-30846",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-30846",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-30846",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "VULHUB",
            "id": "VHN-390579",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A memory corruption issue was addressed with improved memory handling. This issue is fixed in iOS 14.8 and iPadOS 14.8, Safari 15, tvOS 15, iOS 15 and iPadOS 15, watchOS 8. Processing maliciously crafted web content may lead to arbitrary code execution. plural Apple The product contains a vulnerability related to out-of-bounds writes.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. iOS 15 and iPadOS 15. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2022:1777-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:1777\nIssue date:        2022-05-10\nCVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823 \n                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848 \n                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884 \n                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889 \n                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934 \n                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952 \n                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984 \n                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483 \n                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592 \n                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637 \n=====================================================================\n\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nThe following packages have been upgraded to a later upstream version:\nwebkit2gtk3 (2.34.6). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\nSource:\nwebkit2gtk3-2.34.6-1.el8.src.rpm\n\naarch64:\nwebkit2gtk3-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm\nwebkit2
gtk3-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-30809\nhttps://access.redhat.com/security/cve/CVE-2021-30818\nhttps://access.redhat.com/security/cve/CVE-2021-30823\nhttps://access.redhat.com/security/cve/CVE-2021-30836\nhttps://access.redhat.com/security/cve/CVE-2021-30846\nhttps://access.redhat.com/security/cve/CVE-2021-30848\nhttps://access.redhat.com/security/cve/CVE-2021-30849\nhttps://access.redhat.com/security/cve/CVE-2021-30851\nhttps://access.redhat.com/security/cve/CVE-2021-30884\nhttps://access.redhat.com/security/cve/CVE-2021-30887\nhttps://access.redhat.com/security/cve/CVE-2021-30888\nhttps://access.redhat.com/security/cve/CVE-2021-30889\nhttps://access.redhat.com/security/cve/CVE-2021-30890\nhttps://access.redhat.com/security/cve/CVE-2021-30897\nhttps://access.redhat.com/security/cve/CVE-2021-30934\nhttps://access.redhat.com/security/cve/CVE-2021-30936\nhttps://access.redhat.com/security/cve/CVE-2021-30951\nhttps://access.redhat.com/security/cve/CVE-2021-30952\nhttps://access.redhat.com/security/cve/CVE-2021-30953\nhttps://access.redhat.com/security/cve/CVE-2021-30954\nhttps://access.redhat.com/security/cve/CVE-2021-30984\nhttps://access.redhat.com/security/cve/CVE-2021-45481\nhttps://access.redhat.com/security/cve/CVE-2021-45482\nhttps://access.redhat.com/security/cve/CVE-2021-45483\nhttps://access.redhat.com/security/cve/CVE-2022-22589\nhttps://access.redhat.com/security/cve/CVE-2022-22590\nhttps://access.redhat.com/security/cve/CVE-2022-22592\nhttps://access.redhat.com/security/cve/CVE-2022-22594\nhttps://access.redhat.com/security/cve/CVE-2022-22620\nhttps://access.redhat.com/security/cve/CVE-2022-22637\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\". -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-10-26-9 Additional information for\nAPPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15\n\niOS 15 and iPadOS 15 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212814. \n\nAccessory Manager\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nCVE-2021-30837: Siddharth Aeri (@b1n4r1b01)\n\nAppleMobileFileIntegrity\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to read sensitive information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30811: an anonymous researcher working with Compartir\n\nApple Neural Engine\nAvailable for devices with Apple Neural Engine: iPhone 8 and later,\niPad Pro (3rd generation) and later, iPad Air (3rd generation) and\nlater, and iPad mini (5th generation) \nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges on devices with an Apple Neural Engine\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30838: proteas wang\n\nbootp\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A device may be passively tracked by its WiFi MAC address\nDescription: A user privacy issue was addressed by removing the\nbroadcast MAC address. \nCVE-2021-30866: Fabien Duch\u00eane of UCLouvain (Belgium)\nEntry added October 25, 2021\n\nCoreAudio\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a malicious audio file may result in unexpected\napplication termination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30834: JunDong Xie of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\nCoreML\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30825: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nFace ID\nAvailable for devices with Face ID: iPhone X, iPhone XR, iPhone XS\n(all models), iPhone 11 (all models), iPhone 12 (all models), iPad\nPro (11-inch), and iPad Pro (3rd generation)\nImpact: A 3D model constructed to look like the enrolled user may be\nable to authenticate via Face ID\nDescription: This issue was addressed by improving Face ID anti-\nspoofing models. 
\nCVE-2021-30863: Wish Wu (\u5434\u6f4d\u6d60 @wish_wu) of Ant-financial Light-Year\nSecurity Lab\n\nFaceTime\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker with physical access to a device may be able to\nsee private contact information\nDescription: The issue was addressed with improved permissions logic. \nCVE-2021-30816: Atharv (@atharv0x0)\nEntry added October 25, 2021\n\nFaceTime\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application with microphone permission may unexpectedly\naccess microphone input during a FaceTime call\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30882: Adam Bellard and Spencer Reitman of Airtime\nEntry added October 25, 2021\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted font may result in the\ndisclosure of process memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30831: Xingwei Lin of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted dfont file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30840: Xingwei Lin of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted dfont file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab\n\nFoundation\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved\nmemory handling. \nCVE-2021-30852: Yinyi Wu (@3ndy1) of Ant Security Light-Year Lab\nEntry added October 25, 2021\n\niCloud Photo Library\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to access photo metadata\nwithout needing permission to access photos\nDescription: The issue was addressed with improved authentication. \nCVE-2021-30867: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 25, 2021\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\ninput validation. 
\nCVE-2021-30814: hjy79425575\nEntry added October 25, 2021\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30835: Ye Zhang of Baidu Security\nCVE-2021-30847: Mike Zhang of Pangu Lab\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A race condition was addressed with improved locking. \nCVE-2021-30857: Zweig of Kunlun Lab\n\nlibexpat\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed by updating expat to version\n2.4.1. \nCVE-2013-0340: an anonymous researcher\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30819: Apple\n\nNetworkExtension\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A VPN configuration may be installed by an app without user\npermission\nDescription: An authorization issue was addressed with improved state\nmanagement. 
\nCVE-2021-30874: Javier Vieira Boccardo (linkedin.com/javier-vieira-\nboccardo)\nEntry added October 25, 2021\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to access restricted files\nDescription: A validation issue existed in the handling of symlinks. \nCVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nQuick Look\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Previewing an html file attached to a note may unexpectedly\ncontact remote servers\nDescription: A logic issue existed in the handling of document loads. \nCVE-2021-30870: Saif Hamed Al Hinai Oman CERT\nEntry added October 25, 2021\n\nSandbox\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 25, 2021\n\nSiri\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to view contacts from the lock\nscreen\nDescription: A lock screen issue allowed access to contacts on a\nlocked device. \nCVE-2021-30815: an anonymous researcher\n\nTelephony\nAvailable for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad\nAir 2, iPad (5th generation), and iPad mini 4\nImpact: In certain situations, the baseband would fail to enable\nintegrity and ciphering protection\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST\nSysSec Lab\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Visiting a maliciously crafted website may reveal a user\u0027s\nbrowsing history\nDescription: The issue was resolved with additional restrictions on\nCSS compositing. \nCVE-2021-30884: an anonymous researcher\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A type confusion issue was addressed with improved state\nhandling. 
\nCVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted audio file may disclose\nrestricted memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30836: Peter Nguyen Vu Hoang of STAR Labs\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30809: an anonymous researcher\nEntry added October 25, 2021\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2021-30846: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30848: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30849: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption vulnerability was addressed with\nimproved locking. \nCVE-2021-30851: Samuel Gro\u00df of Google Project Zero\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker in physical proximity may be able to force a user\nonto a malicious Wi-Fi network during device setup\nDescription: An authorization issue was addressed with improved state\nmanagement. \nCVE-2021-30810: an anonymous researcher\n\nAdditional recognition\n\nAssets\nWe would like to acknowledge Cees Elzinga for their assistance. \n\nBluetooth\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nFile System\nWe would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their\nassistance. \n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nUIKit\nWe would like to acknowledge an anonymous researcher for their\nassistance. 
\n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \"15\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmF4hy0ACgkQeC9qKD1p\nrhiHNRAAwUaVHgd+whk6qGBZ3PYqSbvvuuo00rLW6JIqv9dwpEh9BBD//bSsUppb\n41J5VaNoKDsonTLhXt0Mhn66wmhbGjLneMIoNb7ffl7O2xDQaWAr+HmoUm6wOo48\nKqj/wJGNJJov4ucBA6InpUz1ZevEhaPU4QMNedVck4YSl1GhtSTJsBAzVkMakQhX\nuJ1fVdOJ5konmmQJLYxDUo60xqS0sZPchkwCM1zwR/SAZ70pt6P0MGI1Yddjcn1U\nloAcKYVgkKAc9RWkXRskR1RxFBGivTI/gy5pDkLxfGfwFecf6PSR7MDki4xDeoVH\n5FWXBwga8Uc/afGRqnFwTpdsisRZP8rQFwMam1T/DwgrWD8R2CCn/wOcvbtlWMIv\nLczYCJFMELaXOjFF5duXaUJme97567OypYvhjBDtiIPg5MCGhZZCmpbRjkcUBZNJ\nYQOELzq6CHWc96mjPOt34B0X2VXGhvgpQ0/evvcQe3bHv0F7N/acAlgsGe+e4Jn8\nk0gWZocq+fPnl6YYgZKIGgcZWUl5bdqduApesEtpRU2ug2TE+xMOhMZXb1WLawJl\nn/OtVHhIjft23r0MGgyWTIHMPe5DRvEPWGI3DS+55JX6XOxSGp9o6xgOAraZR4U6\nHO/WbQOwj7SSKbyPxmDTp4OMyFPukbe92WIMh5EpFcILp6GTJqQ=\n=lg51\n-----END PGP SIGNATURE-----\n\n\n. \nCVE-2021-30851: Samuel Gro\u00df of Google Project Zero\n\nInstallation note:\n\nThis update may be obtained from the Mac App Store. Apple is aware of a report that this issue may have\nbeen actively exploited. Apple is aware of a report that this issue\nmay have been actively exploited. \nCVE-2021-30846: Sergei Glazunov of Google Project Zero\nEntry added September 20, 2021\n\nAdditional recognition\n\nCoreML\nWe would like to acknowledge hjy79425575 working with Trend Micro\nZero Day Initiative for their assistance. ==========================================================================\nUbuntu Security Notice USN-5127-1\nNovember 01, 2021\n\nwebkit2gtk vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 21.04\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in WebKitGTK. 
\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n  libjavascriptcoregtk-4.0-18     2.34.1-0ubuntu0.21.10.1\n  libwebkit2gtk-4.0-37            2.34.1-0ubuntu0.21.10.1\n\nUbuntu 21.04:\n  libjavascriptcoregtk-4.0-18     2.34.1-0ubuntu0.21.04.1\n  libwebkit2gtk-4.0-37            2.34.1-0ubuntu0.21.04.1\n\nUbuntu 20.04 LTS:\n  libjavascriptcoregtk-4.0-18     2.34.1-0ubuntu0.20.04.1\n  libwebkit2gtk-4.0-37            2.34.1-0ubuntu0.20.04.1\n\nThis update uses a new upstream release, which includes additional bug\nfixes. After a standard system update you need to restart any applications\nthat use WebKitGTK, such as Epiphany, to make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202202-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: February 01, 2022\n     Bugs: #779175, #801400, #813489, #819522, #820434, #829723,\n           #831739\n       ID: 202202-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in the arbitrary execution of code. \n\nAffected packages\n================\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk        \u003c 2.34.4                    \u003e= 2.34.4\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebkitGTK+. 
Please\nreview the CVE identifiers referenced below for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.34.4\"\n\nReferences\n=========\n[ 1 ] CVE-2021-30848\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30848\n[ 2 ] CVE-2021-30888\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30888\n[ 3 ] CVE-2021-30682\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30682\n[ 4 ] CVE-2021-30889\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30889\n[ 5 ] CVE-2021-30666\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30666\n[ 6 ] CVE-2021-30665\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30665\n[ 7 ] CVE-2021-30890\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30890\n[ 8 ] CVE-2021-30661\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30661\n[ 9 ] WSA-2021-0005\n      https://webkitgtk.org/security/WSA-2021-0005.html\n[ 10 ] CVE-2021-30761\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30761\n[ 11 ] CVE-2021-30897\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30897\n[ 12 ] CVE-2021-30823\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30823\n[ 13 ] CVE-2021-30734\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30734\n[ 14 ] CVE-2021-30934\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30934\n[ 15 ] CVE-2021-1871\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1871\n[ 16 ] CVE-2021-30762\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30762\n[ 17 ] WSA-2021-0006\n      https://webkitgtk.org/security/WSA-2021-0006.html\n[ 18 ] CVE-2021-30797\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30797\n[ 19 ] CVE-2021-30936\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30936\n[ 20 ] CVE-2021-30663\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30663\n[ 21 ] CVE-2021-1825\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1825\n[ 22 ] CVE-2021-30951\n      
https://nvd.nist.gov/vuln/detail/CVE-2021-30951\n[ 23 ] CVE-2021-30952\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30952\n[ 24 ] CVE-2021-1788\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1788\n[ 25 ] CVE-2021-1820\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1820\n[ 26 ] CVE-2021-30953\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30953\n[ 27 ] CVE-2021-30749\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30749\n[ 28 ] CVE-2021-30849\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30849\n[ 29 ] CVE-2021-1826\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1826\n[ 30 ] CVE-2021-30836\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30836\n[ 31 ] CVE-2021-30954\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30954\n[ 32 ] CVE-2021-30984\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30984\n[ 33 ] CVE-2021-30851\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30851\n[ 34 ] CVE-2021-30758\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30758\n[ 35 ] CVE-2021-42762\n      https://nvd.nist.gov/vuln/detail/CVE-2021-42762\n[ 36 ] CVE-2021-1844\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1844\n[ 37 ] CVE-2021-30689\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30689\n[ 38 ] CVE-2021-45482\n      https://nvd.nist.gov/vuln/detail/CVE-2021-45482\n[ 39 ] CVE-2021-30858\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30858\n[ 40 ] CVE-2021-21779\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21779\n[ 41 ] WSA-2021-0004\n       https://webkitgtk.org/security/WSA-2021-0004.html\n[ 42 ] CVE-2021-30846\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30846\n[ 43 ] CVE-2021-30744\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30744\n[ 44 ] CVE-2021-30809\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30809\n[ 45 ] CVE-2021-30884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30884\n[ 46 ] CVE-2021-30720\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30720\n[ 47 ] CVE-2021-30799\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30799\n[ 48 ] 
CVE-2021-30795\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30795\n[ 49 ] CVE-2021-1817\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1817\n[ 50 ] CVE-2021-21775\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21775\n[ 51 ] CVE-2021-30887\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30887\n[ 52 ] CVE-2021-21806\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21806\n[ 53 ] CVE-2021-30818\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30818\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202202-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-30846"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30846"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164242"
      },
      {
        "db": "PACKETSTORM",
        "id": "164736"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      }
    ],
    "trust": 2.52
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-390579",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-30846",
        "trust": 3.6
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/10/27/4",
        "trust": 1.1
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/10/27/2",
        "trust": 1.1
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/10/26/9",
        "trust": 1.1
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/10/27/1",
        "trust": 1.1
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164689",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "167037",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164688",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164736",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164692",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164693",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-390579",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30846",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164233",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164242",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165794",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30846"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164242"
      },
      {
        "db": "PACKETSTORM",
        "id": "164736"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "id": "VAR-202110-1622",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:37:19.368000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT212819 Apple\u00a0 Security update",
        "trust": 0.8,
        "url": "https://www.debian.org/security/2021/dsa-4995"
      },
      {
        "title": "Apple: iOS 15 and iPadOS 15",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=apple_security_advisories\u0026qid=f34e9bd3d1b055a84fc033981719f5fb"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-30846"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.1
      },
      {
        "problemtype": "Out-of-bounds write (CWE-787) [NVD evaluation]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "http://seclists.org/fulldisclosure/2021/oct/62"
      },
      {
        "trust": 1.9,
        "url": "http://seclists.org/fulldisclosure/2021/oct/63"
      },
      {
        "trust": 1.9,
        "url": "http://seclists.org/fulldisclosure/2021/oct/60"
      },
      {
        "trust": 1.9,
        "url": "http://seclists.org/fulldisclosure/2021/oct/61"
      },
      {
        "trust": 1.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht212869"
      },
      {
        "trust": 1.1,
        "url": "https://www.debian.org/security/2021/dsa-4995"
      },
      {
        "trust": 1.1,
        "url": "https://www.debian.org/security/2021/dsa-4996"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/en-us/ht212807"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/en-us/ht212814"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/en-us/ht212815"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/en-us/ht212816"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/en-us/ht212819"
      },
      {
        "trust": 1.1,
        "url": "http://www.openwall.com/lists/oss-security/2021/10/26/9"
      },
      {
        "trust": 1.1,
        "url": "http://www.openwall.com/lists/oss-security/2021/10/27/1"
      },
      {
        "trust": 1.1,
        "url": "http://www.openwall.com/lists/oss-security/2021/10/27/2"
      },
      {
        "trust": 1.1,
        "url": "http://www.openwall.com/lists/oss-security/2021/10/27/4"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/h6mgxcx7p5ahwoq6irt477ukt7is4dad/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/on5sdvvpvpcagfpw2ghyatzvzylpw2l4/"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 0.5,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.5,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30841"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30843"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30842"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2013-0340"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30835"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30810"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30857"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30847"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30855"
      },
      {
        "trust": 0.3,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30811"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30837"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30888"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30952"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30984"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45482"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30953"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30936"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30897"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30954"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30887"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30934"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30951"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30889"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30890"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30815"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/ht212814."
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30838"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30825"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30826"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30854"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30819"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30808"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30834"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30831"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30814"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30840"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30858"
      },
      {
        "trust": 0.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/on5sdvvpvpcagfpw2ghyatzvzylpw2l4/"
      },
      {
        "trust": 0.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/h6mgxcx7p5ahwoq6irt477ukt7is4dad/"
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/fulldisclosure/2021/sep/37"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht212814"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22592"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22637"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1777"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22620"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30863"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30852"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212819."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30866"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30816"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212816."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30820"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30859"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212807."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30860"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/webkit2gtk/2.34.1-0ubuntu0.21.04.1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/webkit2gtk/2.34.1-0ubuntu0.21.10.1"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-5127-1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/webkit2gtk/2.34.1-0ubuntu0.20.04.1"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30744"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1820"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0005.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30682"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30663"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1817"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42762"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30758"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30799"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21779"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202202-01"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1871"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30665"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30795"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1825"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30734"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30797"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21775"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30689"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0004.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30720"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1788"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0006.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21806"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30846"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164242"
      },
      {
        "db": "PACKETSTORM",
        "id": "164736"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30846"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164242"
      },
      {
        "db": "PACKETSTORM",
        "id": "164736"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-10-19T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "date": "2022-05-11T15:50:41",
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "date": "2021-09-22T16:22:10",
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "date": "2021-10-28T14:58:43",
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "date": "2021-10-28T14:55:28",
        "db": "PACKETSTORM",
        "id": "164689"
      },
      {
        "date": "2021-10-28T14:55:07",
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "date": "2021-09-22T16:30:10",
        "db": "PACKETSTORM",
        "id": "164242"
      },
      {
        "date": "2021-11-02T15:22:56",
        "db": "PACKETSTORM",
        "id": "164736"
      },
      {
        "date": "2022-02-01T17:03:05",
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "date": "2022-09-29T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "date": "2021-10-19T14:15:09.617000",
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-11-23T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390579"
      },
      {
        "date": "2022-09-29T02:23:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      },
      {
        "date": "2023-11-07T03:33:31.383000",
        "db": "NVD",
        "id": "CVE-2021-30846"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164736"
      }
    ],
    "trust": 0.1
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Out-of-bounds write vulnerability in multiple Apple products",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-013862"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "spoof, code execution",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164689"
      }
    ],
    "trust": 0.2
  }
}

VAR-201912-1864

Vulnerability from variot - Updated: 2025-12-22 22:36

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iCloud for Windows 11.0. Processing maliciously crafted web content may lead to arbitrary code execution. Apple has released an update for each product. The expected impact depends on each vulnerability, but may include:

* Information leak
* User impersonation
* Arbitrary code execution
* UI spoofing
* Insufficient access restrictions
* Service operation interruption (DoS)
* Privilege escalation
* Memory corruption
* Authentication bypass

Apple iOS and the other affected products are developed by Apple. Apple iOS is an operating system for mobile devices; Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple iOS prior to 13.1; iPadOS prior to 13.1; Windows-based Apple iTunes prior to 12.10.1; Windows-based iCloud prior to 11.0; watchOS prior to 6; and tvOS prior to 13.

WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)

WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)

An out-of-bounds read was addressed with improved input validation. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4; no further details are available in NIST. (CVE-2019-8766)

"Clear History and Website Data" did not clear the history. The issue was addressed with improved data deletion. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)

WebKitGTK and WPE WebKit up to and including 2.26.4 (the versions immediately prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK; it allows remote attackers to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902).

Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3
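References like the ones above follow the usual `registry/repository:tag` shape. As a minimal, illustrative sketch (not a full OCI reference parser; it assumes a tag is present and no digest), splitting one into its parts looks like:

```python
# Illustrative helper: split a container image reference of the form
# registry/repository:tag into its parts. Assumes a tag is present and
# no @sha256 digest; not a complete OCI reference parser.
def split_image_ref(ref: str):
    path, _, tag = ref.rpartition(":")
    registry, _, repository = path.partition("/")
    return registry, repository, tag

for ref in [
    "quay.io/redhat/quay:v3.3.3",
    "quay.io/redhat/clair-jwt:v3.3.3",
    "quay.io/redhat/quay-builder:v3.3.3",
    "quay.io/redhat/clair:v3.3.3",
]:
    registry, repository, tag = split_image_ref(ref)
    print(registry, repository, tag)
```

All four release images share the `v3.3.3` tag, so a check like this can confirm that a mirrored or pulled image matches the fixed release.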

  1. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

  1. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

The compliance-operator image updates are now available for OpenShift Container Platform 4.6.

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Solution:

For OpenShift Container Platform 4.6 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.6/updating/updating-cluster-cli.html.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1918990 - ComplianceSuite scans use quay content image for initContainer
1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present
1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules
1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.
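The Bugzilla references in these advisories follow a fixed `<bug-id> - <summary>` shape. A small illustrative sketch for pulling (ID, summary) pairs out of a run of such entries (the 7-digit ID width is an assumption based on the entries above, not a Bugzilla guarantee):

```python
import re

# Illustrative sketch: split a run of "Bugs fixed" entries into
# (bugzilla-id, summary) pairs. Assumes IDs are exactly 7 digits and
# that summaries never themselves contain " <7 digits> - ".
ENTRY = re.compile(r"(\d{7}) - (.*?)(?=(?: \d{7} - )|$)")

def split_bugs(text: str):
    return [(m.group(1), m.group(2).strip()) for m in ENTRY.finditer(text)]

bugs = split_bugs(
    "1897635 - CVE-2020-28362 golang: math/big: panic during recursive "
    "division of very large numbers "
    "1918990 - ComplianceSuite scans use quay content image for initContainer"
)
```

This is useful when reconciling an advisory's bug list against a tracker export; each pair keeps the CVE identifier (when present) at the front of the summary.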

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902251 - The compliancesuite object returns error with ocp4-cis tailored profile
1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object
1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object
1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator
1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path"
1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor
1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv
1791753 - [RFE] [SSP] Template validator should check validations in template's parent template
1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic
1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration
1848956 - KMP requires downtime for CA stabilization during certificate rotation
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1853911 - VM with dot in network name fails to start with unclear message
1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show"
1856347 - SR-IOV : Missing network name for sriov during vm setup
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination
1860714 - No API information from oc explain
1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints
1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem
1866593 - CDI is not handling vm disk clone
1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs
1868817 - Container-native Virtualization 2.6.0 Images
1873771 - Improve the VMCreationFailed error message caused by VM low memory
1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it
1878499 - DV import doesn't recover from scratch space PVC deletion
1879108 - Inconsistent naming of "oc virt" command in help text
1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running
1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message
1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used
1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied
1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.
1891285 - Common templates and kubevirt-config cm - update machine-type
1891440 - [v2v][VMware to CNV VM import API]Source VM with no network interface fail with unclear error
1892227 - [SSP] cluster scoped resources are not being reconciled
1893278 - openshift-virtualization-os-images namespace not seen by user
1893646 - [HCO] Pod placement configuration - dry run is not performed for all the configuration stanza
1894428 - Message for VMI not migratable is not clear enough
1894824 - [v2v][VM import] Pick the smallest template for the imported VM, and not always Medium
1894897 - [v2v][VMIO] VMimport CR is not reported as failed when target VM is deleted during the import
1895414 - Virt-operator is accepting updates to the placement of its workload components even with running VMs
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1898072 - Add Fedora33 to Fedora common templates
1898840 - [v2v] VM import VMWare to CNV Import 63 chars vm name should not fail
1899558 - CNV 2.6 - nmstate fails to set state
1901480 - VM disk io can't worked if namespace have label kubemacpool
1902046 - Not possible to edit CDIConfig (through CDI CR / CDIConfig)
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1903014 - hco-webhook pod in CreateContainerError
1903585 - [v2v] Windows 2012 VM imported from RHV goes into Windows repair mode
1904797 - [VMIO][vmware] A migrated RHEL/Windows VM starts in emergency mode/safe mode when target storage is NFS and target namespace is NOT "default"
1906199 - [CNV-2.5] CNV Tries to Install on Windows Workers
1907151 - kubevirt version is not reported correctly via virtctl
1907352 - VM/VMI link changes to kubevirt.io~v1~VirtualMachineInstance on CNV 2.6
1907691 - [CNV] Configuring NodeNetworkConfigurationPolicy caused "Internal error occurred" for creating datavolume
1907988 - VM loses dynamic IP address of its default interface after migration
1908363 - Applying NodeNetworkConfigurationPolicy for different NIC than default disables br-ex bridge and nodes lose connectivity
1908421 - [v2v] [VM import RHV to CNV] Windows imported VM boot failed: INACCESSIBLE BOOT DEVICE error
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1909458 - [V2V][VMware to CNV VM import via api using VMIO] VM import to Ceph RBD/BLOCK fails on "qemu-img: /data/disk.img" error
1910857 - Provide a mechanism to enable the HotplugVolumes feature gate via HCO
1911118 - Windows VMI LiveMigration / shutdown fails on 'XML error: non unique alias detected: ua-')
1911396 - Set networkInterfaceMultiqueue false in rhel 6 template for e1000e interface
1911662 - el6 guests don't work properly if virtio bus is specified on various devices
1912908 - Allow using "scsi" bus for disks in template validation
1913248 - Creating vlan interface on top of a bond device via NodeNetworkConfigurationPolicy fails
1913320 - Informative message needed with virtctl image-upload, that additional step is needed from the user
1913717 - Users should have read permitions for golden images data volumes
1913756 - Migrating to Ceph-RBD + Block fails when skipping zeroes
1914177 - CNV does not preallocate blank file data volumes
1914608 - Obsolete CPU models (kubevirt-cpu-plugin-configmap) are set on worker nodes
1914947 - HPP golden images - DV shoudld not be created with WaitForFirstConsumer
1917908 - [VMIO] vmimport pod fail to create when using ceph-rbd/block
1917963 - [CNV 2.6] Unable to install CNV disconnected - requires kvm-info-nfd-plugin which is not mirrored
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1920576 - HCO can report ready=true when it failed to create a CR for a component operator
1920610 - e2e-aws-4.7-cnv consistently failing on Hyperconverged Cluster Operator
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1923979 - kubernetes-nmstate: nmstate-handler pod crashes when configuring bridge device using ip tool
1927373 - NoExecute taint violates pdb; VMIs are not live migrated
1931376 - VMs disconnected from nmstate-defined bridge after CNV-2.5.4->CNV-2.6.0 upgrade

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: webkitgtk4 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:4035-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:4035
Issue date:        2020-09-29
CVE Names:         CVE-2019-6237 CVE-2019-6251 CVE-2019-8506 CVE-2019-8524 CVE-2019-8535 CVE-2019-8536 CVE-2019-8544 CVE-2019-8551 CVE-2019-8558 CVE-2019-8559 CVE-2019-8563 CVE-2019-8571 CVE-2019-8583 CVE-2019-8584 CVE-2019-8586 CVE-2019-8587 CVE-2019-8594 CVE-2019-8595 CVE-2019-8596 CVE-2019-8597 CVE-2019-8601 CVE-2019-8607 CVE-2019-8608 CVE-2019-8609 CVE-2019-8610 CVE-2019-8611 CVE-2019-8615 CVE-2019-8619 CVE-2019-8622 CVE-2019-8623 CVE-2019-8625 CVE-2019-8644 CVE-2019-8649 CVE-2019-8658 CVE-2019-8666 CVE-2019-8669 CVE-2019-8671 CVE-2019-8672 CVE-2019-8673 CVE-2019-8674 CVE-2019-8676 CVE-2019-8677 CVE-2019-8678 CVE-2019-8679 CVE-2019-8680 CVE-2019-8681 CVE-2019-8683 CVE-2019-8684 CVE-2019-8686 CVE-2019-8687 CVE-2019-8688 CVE-2019-8689 CVE-2019-8690 CVE-2019-8707 CVE-2019-8710 CVE-2019-8719 CVE-2019-8720 CVE-2019-8726 CVE-2019-8733 CVE-2019-8735 CVE-2019-8743 CVE-2019-8763 CVE-2019-8764 CVE-2019-8765 CVE-2019-8766 CVE-2019-8768 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8821 CVE-2019-8822 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-11070 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-10018 CVE-2020-11793
====================================================================

1. Summary:

An update for webkitgtk4 is now available for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
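Each CVE page publishes a CVSS base score together with its vector string. As an illustrative sketch (the example vector is hypothetical, not taken from this advisory), a v3.x vector can be decoded into its metrics like this:

```python
# Illustrative sketch: decode a CVSS v3.x vector string into a metric
# dictionary. The example vector below is hypothetical.
def parse_cvss_vector(vector: str) -> dict:
    version, *metrics = vector.split("/")
    if not version.startswith("CVSS:"):
        raise ValueError("not a CVSS v3.x vector: " + vector)
    # Each metric is "Abbreviation:Value", e.g. "AV:N" (Attack Vector: Network)
    return dict(m.split(":") for m in metrics)

v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H")
```

This kind of decoding is handy when triaging a long CVE list: filtering on, say, `AV == "N"` quickly isolates the network-reachable issues.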

2. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - noarch, x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

3. Description:

WebKitGTK+ is a port of the WebKit portable web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2). (BZ#1817144)

Security Fix(es):

  • webkitgtk: Multiple security issues (CVE-2019-6237, CVE-2019-6251, CVE-2019-8506, CVE-2019-8524, CVE-2019-8535, CVE-2019-8536, CVE-2019-8544, CVE-2019-8551, CVE-2019-8558, CVE-2019-8559, CVE-2019-8563, CVE-2019-8571, CVE-2019-8583, CVE-2019-8584, CVE-2019-8586, CVE-2019-8587, CVE-2019-8594, CVE-2019-8595, CVE-2019-8596, CVE-2019-8597, CVE-2019-8601, CVE-2019-8607, CVE-2019-8608, CVE-2019-8609, CVE-2019-8610, CVE-2019-8611, CVE-2019-8615, CVE-2019-8619, CVE-2019-8622, CVE-2019-8623, CVE-2019-8625, CVE-2019-8644, CVE-2019-8649, CVE-2019-8658, CVE-2019-8666, CVE-2019-8669, CVE-2019-8671, CVE-2019-8672, CVE-2019-8673, CVE-2019-8674, CVE-2019-8676, CVE-2019-8677, CVE-2019-8678, CVE-2019-8679, CVE-2019-8680, CVE-2019-8681, CVE-2019-8683, CVE-2019-8684, CVE-2019-8686, CVE-2019-8687, CVE-2019-8688, CVE-2019-8689, CVE-2019-8690, CVE-2019-8707, CVE-2019-8710, CVE-2019-8719, CVE-2019-8720, CVE-2019-8726, CVE-2019-8733, CVE-2019-8735, CVE-2019-8743, CVE-2019-8763, CVE-2019-8764, CVE-2019-8765, CVE-2019-8766, CVE-2019-8768, CVE-2019-8769, CVE-2019-8771, CVE-2019-8782, CVE-2019-8783, CVE-2019-8808, CVE-2019-8811, CVE-2019-8812, CVE-2019-8813, CVE-2019-8814, CVE-2019-8815, CVE-2019-8816, CVE-2019-8819, CVE-2019-8820, CVE-2019-8821, CVE-2019-8822, CVE-2019-8823, CVE-2019-8835, CVE-2019-8844, CVE-2019-8846, CVE-2019-11070, CVE-2020-3862, CVE-2020-3864, CVE-2020-3865, CVE-2020-3867, CVE-2020-3868, CVE-2020-3885, CVE-2020-3894, CVE-2020-3895, CVE-2020-3897, CVE-2020-3899, CVE-2020-3900, CVE-2020-3901, CVE-2020-3902, CVE-2020-10018, CVE-2020-11793)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

ppc64: webkitgtk4-2.28.2-2.el7.ppc.rpm webkitgtk4-2.28.2-2.el7.ppc64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le: webkitgtk4-2.28.2-2.el7.ppc64le.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x: webkitgtk4-2.28.2-2.el7.s390.rpm webkitgtk4-2.28.2-2.el7.s390x.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64: webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x: webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-devel-2.28.2-2.el7.s390.rpm webkitgtk4-devel-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm
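The package file names above follow RPM's `name-version-release.arch.rpm` convention. A minimal illustrative parser (it assumes well-formed names and does not handle epochs or source RPM subtleties):

```python
# Illustrative sketch: split an RPM file name of the form
# name-version-release.arch.rpm into its four fields. Parsing works
# right-to-left because the package name itself may contain hyphens
# (e.g. "webkitgtk4-doc"). Epochs are not handled.
def parse_rpm_filename(filename: str):
    base = filename[:-len(".rpm")] if filename.endswith(".rpm") else filename
    rest, _, arch = base.rpartition(".")       # peel off the architecture
    name, version, release = rest.rsplit("-", 2)  # then release and version
    return name, version, release, arch

print(parse_rpm_filename("webkitgtk4-2.28.2-2.el7.x86_64.rpm"))
```

Checking that installed packages match the fixed `2.28.2-2.el7` version-release is then a simple tuple comparison against the parsed fields.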

These packages are GPG signed by Red Hat for security.

6. References:

https://access.redhat.com/security/cve/CVE-2019-6237 https://access.redhat.com/security/cve/CVE-2019-6251 https://access.redhat.com/security/cve/CVE-2019-8506 https://access.redhat.com/security/cve/CVE-2019-8524 https://access.redhat.com/security/cve/CVE-2019-8535 https://access.redhat.com/security/cve/CVE-2019-8536 https://access.redhat.com/security/cve/CVE-2019-8544 https://access.redhat.com/security/cve/CVE-2019-8551 https://access.redhat.com/security/cve/CVE-2019-8558 https://access.redhat.com/security/cve/CVE-2019-8559 https://access.redhat.com/security/cve/CVE-2019-8563 https://access.redhat.com/security/cve/CVE-2019-8571 https://access.redhat.com/security/cve/CVE-2019-8583 https://access.redhat.com/security/cve/CVE-2019-8584 https://access.redhat.com/security/cve/CVE-2019-8586 https://access.redhat.com/security/cve/CVE-2019-8587 https://access.redhat.com/security/cve/CVE-2019-8594 https://access.redhat.com/security/cve/CVE-2019-8595 https://access.redhat.com/security/cve/CVE-2019-8596 https://access.redhat.com/security/cve/CVE-2019-8597 https://access.redhat.com/security/cve/CVE-2019-8601 https://access.redhat.com/security/cve/CVE-2019-8607 https://access.redhat.com/security/cve/CVE-2019-8608 https://access.redhat.com/security/cve/CVE-2019-8609 https://access.redhat.com/security/cve/CVE-2019-8610 https://access.redhat.com/security/cve/CVE-2019-8611 https://access.redhat.com/security/cve/CVE-2019-8615 https://access.redhat.com/security/cve/CVE-2019-8619 https://access.redhat.com/security/cve/CVE-2019-8622 https://access.redhat.com/security/cve/CVE-2019-8623 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8644 https://access.redhat.com/security/cve/CVE-2019-8649 https://access.redhat.com/security/cve/CVE-2019-8658 https://access.redhat.com/security/cve/CVE-2019-8666 https://access.redhat.com/security/cve/CVE-2019-8669 https://access.redhat.com/security/cve/CVE-2019-8671 
https://access.redhat.com/security/cve/CVE-2019-8672 https://access.redhat.com/security/cve/CVE-2019-8673 https://access.redhat.com/security/cve/CVE-2019-8674 https://access.redhat.com/security/cve/CVE-2019-8676 https://access.redhat.com/security/cve/CVE-2019-8677 https://access.redhat.com/security/cve/CVE-2019-8678 https://access.redhat.com/security/cve/CVE-2019-8679 https://access.redhat.com/security/cve/CVE-2019-8680 https://access.redhat.com/security/cve/CVE-2019-8681 https://access.redhat.com/security/cve/CVE-2019-8683 https://access.redhat.com/security/cve/CVE-2019-8684 https://access.redhat.com/security/cve/CVE-2019-8686 https://access.redhat.com/security/cve/CVE-2019-8687 https://access.redhat.com/security/cve/CVE-2019-8688 https://access.redhat.com/security/cve/CVE-2019-8689 https://access.redhat.com/security/cve/CVE-2019-8690 https://access.redhat.com/security/cve/CVE-2019-8707 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8719 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8726 https://access.redhat.com/security/cve/CVE-2019-8733 https://access.redhat.com/security/cve/CVE-2019-8735 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8763 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8765 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8768 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 
https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8821 https://access.redhat.com/security/cve/CVE-2019-8822 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-11070 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2020 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBX3OjINzjgjWX9erEAQjqsg/9FnSEJ3umFx0gtnsZIVRP9YxMIVZhVQ8z rNnK/LGQWq1nPlNC5OF60WRcWA7cC74lh1jl/+xU6p+9JXTq9y9hQTd7Fcf+6T01 RYj2zJe6kGBY/53rhZJKCdb9zNXz1CkqsuvTPqVGIabUWTTlsBFnd6l4GK6QL4kM XVQZyWtmSfmLII4Ocdav9WocJzH6o1TbEo+O9Fm6WjdVOK+/+VzPki0/dW50CQAK R8u5tTXZR5m52RLmvhs/LTv3yUnmhEkhvrR0TtuR8KRfcP1/ytNwn3VidFefuAO1 PWrgpjIPWy/kbtZaZWK4fBblYj6bKCVD1SiBKQcOfCq0f16aqRP2niFoDXdAy467 eGu0JHkRsIRCLG2rY+JfOau5KtLRhRr0iRe5AhOVpAtUelzjAvEQEcVv4GmZXcwX rXfeagSjWzdo8Mf55d7pjORXAKhGdO3FQSeiCvzq9miZq3NBX4Jm4raobeskw/rJ 1ONqg4fE7Gv7rks8QOy5xErwI8Ut1TGJAgYOD8rmRptr05hBWQFJCfmoc4KpxsMe PJoRag0AZfYxYoMe5avMcGCYHosU63z3wS7gao9flj37NkEi6M134vGmCpPNmpGr w5HQly9SO3mD0a92xOUn42rrXq841ZkVu89fR6j9wBn8NAKLWH6eUjZkVMNmLRzh PKg+HFNkMjk=dS3G -----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://www.redhat.com/mailman/listinfo/rhsa-announce

To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2019-10-29-8 Additional information for APPLE-SA-2019-9-26-5 watchOS 6

watchOS 6 addresses the following:

Audio Available for: Apple Watch Series 3 and later Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2019-8753: Łukasz Pilorz of Standard Chartered GBS Poland Entry added October 29, 2019

CoreAudio Available for: Apple Watch Series 3 and later Impact: Processing a maliciously crafted movie may result in the disclosure of process memory Description: A memory corruption issue was addressed with improved validation. CVE-2019-8705: riusksk of VulWar Corp working with Trend Micro's Zero Day Initiative Entry added October 29, 2019

CoreCrypto Available for: Apple Watch Series 3 and later Impact: Processing a large input may lead to a denial of service Description: A denial of service issue was addressed with improved input validation. CVE-2019-8741: Nicky Mouha of NIST Entry added October 29, 2019
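The "improved input validation" fix pattern mentioned above typically means rejecting over-large inputs up front rather than letting an attacker-supplied size drive unbounded work. A minimal sketch of that pattern, with made-up names (`MAX_INPUT_BYTES`, `process_digest`) purely for illustration — this is not Apple's CoreCrypto code:

```python
import hashlib

# Assumed cap for this sketch; a real implementation would derive the
# limit from the protocol or API contract.
MAX_INPUT_BYTES = 1 << 20


def process_digest(data: bytes) -> str:
    """Hash the input, refusing inputs large enough to act as a DoS vector."""
    if len(data) > MAX_INPUT_BYTES:
        raise ValueError("input exceeds allowed size")
    return hashlib.sha256(data).hexdigest()
```

The key point is that the size check happens before any processing, so oversized inputs cost the service almost nothing.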

Foundation Available for: Apple Watch Series 3 and later Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2019-8641: Samuel Groß and Natalie Silvanovich of Google Project Zero CVE-2019-8746: Natalie Silvanovich and Samuel Groß of Google Project Zero Entry added October 29, 2019

IOUSBDeviceFamily Available for: Apple Watch Series 3 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8718: Joshua Hill and Sem Voigtländer Entry added October 29, 2019

Kernel Available for: Apple Watch Series 3 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption vulnerability was addressed with improved locking. CVE-2019-8740: Mohamed Ghannam (@_simo36) Entry added October 29, 2019
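A fix "addressed with improved locking" generally means a race on shared state was serialized behind a mutex. A hypothetical user-space sketch of the pattern (illustrative names only, not kernel code):

```python
import threading


class Counter:
    """Shared state whose read-modify-write is protected by a lock."""

    def __init__(self) -> None:
        self._value = 0
        self._lock = threading.Lock()  # serializes the read-modify-write

    def increment(self) -> None:
        # Without the lock, two threads can read the same old value
        # and one of the updates is lost (a classic race).
        with self._lock:
            self._value += 1

    @property
    def value(self) -> int:
        with self._lock:
            return self._value


def hammer(counter: Counter, n: int) -> None:
    for _ in range(n):
        counter.increment()
```

In kernel code the same idea applies with spinlocks or mutexes around the vulnerable data structure; the lock guarantees each update sees the previous one's result.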

Kernel Available for: Apple Watch Series 3 and later Impact: A local app may be able to read a persistent account identifier Description: A validation issue was addressed with improved logic. CVE-2019-8809: Apple Entry added October 29, 2019

Kernel Available for: Apple Watch Series 3 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8717: Jann Horn of Google Project Zero Entry added October 29, 2019

Kernel Available for: Apple Watch Series 3 and later Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8712: Mohamed Ghannam (@_simo36) Entry added October 29, 2019

Kernel Available for: Apple Watch Series 3 and later Impact: A malicious application may be able to determine kernel memory layout Description: A memory corruption issue existed in the handling of IPv6 packets. CVE-2019-8744: Zhuo Liang of Qihoo 360 Vulcan Team Entry added October 29, 2019

Kernel Available for: Apple Watch Series 3 and later Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2019-8749: found by OSS-Fuzz CVE-2019-8756: found by OSS-Fuzz Entry added October 29, 2019

mDNSResponder Available for: Apple Watch Series 3 and later Impact: An attacker in physical proximity may be able to passively observe device names in AWDL communications Description: This issue was resolved by replacing device names with a random identifier. CVE-2019-8799: David Kreitschmann and Milan Stute of Secure Mobile Networking Lab at Technische Universität Darmstadt Entry added October 29, 2019
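The mitigation described above — advertising a random identifier instead of the user-chosen device name — can be sketched as follows. `secrets.token_hex` and the `device-` prefix are stand-ins for illustration; the real AWDL change is internal to Apple's stack:

```python
import secrets


def advertised_name() -> str:
    """Return a non-identifying, randomly generated advertisement name.

    Regenerating this per session means a passive observer cannot link
    broadcasts to a person the way a hostname like "Alice's Watch" would.
    """
    return "device-" + secrets.token_hex(4)  # 32 random bits, hex-encoded
```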

UIFoundation Available for: Apple Watch Series 3 and later Impact: Processing a maliciously crafted text file may lead to arbitrary code execution Description: A buffer overflow was addressed with improved bounds checking. CVE-2019-8710: found by OSS-Fuzz CVE-2019-8728: Junho Jang of LINE Security Team and Hanul Choi of ABLY Corporation CVE-2019-8734: found by OSS-Fuzz CVE-2019-8751: Dongzhuo Zhao working with ADLab of Venustech CVE-2019-8752: Dongzhuo Zhao working with ADLab of Venustech CVE-2019-8773: found by OSS-Fuzz
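"Improved bounds checking" for a buffer overflow means validating a requested write length against the destination's actual capacity instead of trusting an attacker-controlled length field. A minimal sketch of the pattern, purely illustrative and not UIFoundation's actual code:

```python
def bounded_copy(dst: bytearray, src: bytes, n: int) -> None:
    """Copy n bytes of src into dst, refusing out-of-bounds writes."""
    # Reject negative lengths, reads past the source, and writes past
    # the destination before touching any memory.
    if n < 0 or n > len(src) or n > len(dst):
        raise ValueError("copy length exceeds buffer bounds")
    dst[:n] = src[:n]
```

In C the equivalent fix is checking the length against `sizeof(dst)` (or the allocation size) before the `memcpy`, so a crafted file cannot write past the end of the buffer.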

Additional recognition

Audio We would like to acknowledge riusksk of VulWar Corp working with Trend Micro's Zero Day Initiative for their assistance. Entry added October 29, 2019

boringssl We would like to acknowledge Thijs Alkemade (@xnyhps) of Computest for their assistance.

HomeKit We would like to acknowledge Tian Zhang for their assistance.

Kernel We would like to acknowledge Brandon Azad of Google Project Zero for their assistance.

mDNSResponder We would like to acknowledge Gregor Lang of e.solutions GmbH for their assistance.

Profiles We would like to acknowledge Erik Johnson of Vernon Hills High School and James Seeley (@Code4iOS) of Shriver Job Corps for their assistance.

Safari We would like to acknowledge Yiğit Can YILMAZ (@yilmazcanyigit) of TurkishKit for their assistance.

WebKit We would like to acknowledge MinJeong Kim of Information Security Lab, Chungnam National University, JaeCheol Ryou of the Information Security Lab, Chungnam National University in South Korea and cc working with Trend Micro's Zero Day Initiative for their assistance.

Installation note:

Instructions on how to update your Apple Watch software are available at https://support.apple.com/kb/HT204641

To check the version on your Apple Watch, open the Apple Watch app on your iPhone and select "My Watch > General > About".

Alternatively, on your watch, select "My Watch > General > About".


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1864",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.8"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 11.0 earlier"
      },
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows 7.15 earlier"
      },
      {
        "model": "ios",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "12.10.2 for windows earlier"
      },
      {
        "model": "macos catalina",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.15.1 earlier"
      },
      {
        "model": "macos high sierra",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.13.6 (security update 2019-006 not applied )"
      },
      {
        "model": "macos mojave",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "10.14.6 (security update 2019-001 not applied )"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.0.3 earlier"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "13.2 earlier"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "6.1 earlier"
      },
      {
        "model": "xcode",
        "scope": "lt",
        "trust": 0.8,
        "vendor": "apple",
        "version": "11.2 earlier"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_catalina",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_high_sierra",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:mac_os_mojave",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:watchos",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:xcode",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2019-8710",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-8710",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-160145",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2019-8710",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-8710",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201911-017",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-160145",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-8710",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iCloud for Windows 11.0. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Has released an update for each product.The expected impact depends on each vulnerability, but can be affected as follows: * * information leak * * User impersonation * * Arbitrary code execution * * UI Spoofing * * Insufficient access restrictions * * Service operation interruption (DoS) * * Privilege escalation * * Memory corruption * * Authentication bypass. Apple iOS, etc. are all products of Apple (Apple). Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple iOS prior to 13.1; iPadOS prior to 13.1; Windows-based Apple iTunes prior to 12.10.1; Windows-based iCloud prior to 11.0; watchOS prior to 6; and prior to tvOS 13. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. 
(CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. The issue was addressed with improved data deletion. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThe compliance-operator image updates are now available for OpenShift\nContainer Platform 4.6. 
\n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Solution:\n\nFor OpenShift Container Platform 4.6 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.6/updating/updating-cluster\n- -cli.html. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. 
\n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1899479 - Aggregator pod tries to parse ConfigMaps without results\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902251 - The compliancesuite object returns error with ocp4-cis tailored profile\n1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object\n1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object\n1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator\n1909081 - Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error \"something else exists at that path\"\n1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1732329 - Virtual Machine is missing documentation of its properties in yaml editor\n1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv\n1791753 - [RFE] [SSP] Template validator should check validations in template\u0027s parent template\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1848954 - KMP missing CA extensions  in cabundle of mutatingwebhookconfiguration\n1848956 - KMP  requires downtime for CA stabilization during certificate rotation\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1853911 - VM with dot in network name fails to start with unclear message\n1854098 - NodeNetworkState on workers doesn\u0027t have \"status\" key due to nmstate-handler pod failure to run \"nmstatectl show\"\n1856347 - SR-IOV : Missing network name for sriov during vm setup\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1859235 - Common Templates - after upgrade there are 2  common templates per each os-workload-flavor combination\n1860714 - No API information from `oc explain`\n1860992 - CNV upgrade - users are not removed from privileged  SecurityContextConstraints\n1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem\n1866593 - CDI is not handling vm disk clone\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868817 - Container-native Virtualization 2.6.0 Images\n1873771 - Improve the VMCreationFailed error message caused by VM low memory\n1874812 - SR-IOV: Guest Agent  expose link-local ipv6 address  for sometime and then remove it\n1878499 - DV import doesn\u0027t recover from scratch space PVC deletion\n1879108 - Inconsistent 
naming of \"oc virt\" command in help text\n1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running\n1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message\n1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used\n1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied\n1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. \n1891285 - Common templates and kubevirt-config cm - update machine-type\n1891440 - [v2v][VMware to CNV VM import API]Source VM with no network interface fail with unclear error\n1892227 - [SSP] cluster scoped resources are not being reconciled\n1893278 - openshift-virtualization-os-images namespace not seen by user\n1893646 - [HCO] Pod placement configuration - dry run is not performed for all the configuration stanza\n1894428 - Message for VMI not migratable is not clear enough\n1894824 - [v2v][VM import] Pick the smallest template for the imported VM, and not always Medium\n1894897 - [v2v][VMIO] VMimport CR is not reported as failed when target VM is deleted during the import\n1895414 - Virt-operator is accepting updates to the placement of its workload components even with running VMs\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1898072 - Add Fedora33 to Fedora common templates\n1898840 - [v2v] VM import VMWare to CNV Import 63 chars vm name should not fail\n1899558 - CNV 2.6 - nmstate fails to set state\n1901480 - VM disk io can\u0027t worked if namespace have label kubemacpool\n1902046 - Not possible to edit 
CDIConfig (through CDI CR / CDIConfig)\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1903014 - hco-webhook pod in CreateContainerError\n1903585 - [v2v] Windows 2012 VM imported from RHV goes into Windows repair mode\n1904797 - [VMIO][vmware] A migrated RHEL/Windows VM starts in emergency mode/safe mode when target storage is NFS and target namespace is NOT \"default\"\n1906199 - [CNV-2.5] CNV Tries to Install on Windows Workers\n1907151 - kubevirt version is not reported correctly via virtctl\n1907352 - VM/VMI link changes to `kubevirt.io~v1~VirtualMachineInstance` on CNV 2.6\n1907691 - [CNV] Configuring NodeNetworkConfigurationPolicy caused \"Internal error occurred\" for creating datavolume\n1907988 - VM loses dynamic IP address of its default interface after migration\n1908363 - Applying NodeNetworkConfigurationPolicy for different NIC than default disables br-ex bridge and nodes lose connectivity\n1908421 - [v2v] [VM import RHV to CNV] Windows imported VM boot failed: INACCESSIBLE BOOT DEVICE error\n1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference\n1909458 - [V2V][VMware to CNV VM import via api using VMIO] VM import  to Ceph RBD/BLOCK fails on \"qemu-img: /data/disk.img\" error\n1910857 - Provide a mechanism to enable the HotplugVolumes feature gate via HCO\n1911118 - Windows VMI LiveMigration / shutdown fails on \u0027XML error: non unique alias detected: ua-\u0027)\n1911396 - Set networkInterfaceMultiqueue false in rhel 6 template for e1000e interface\n1911662 - el6 guests don\u0027t work properly if virtio bus is specified on various devices\n1912908 - Allow using \"scsi\" bus for disks in template validation\n1913248 - Creating vlan interface on top of a bond device via NodeNetworkConfigurationPolicy fails\n1913320 - Informative message needed with virtctl image-upload, that additional step is needed from the user\n1913717 - Users should 
have read permitions for golden images data volumes\n1913756 - Migrating to Ceph-RBD + Block fails when skipping zeroes\n1914177 - CNV does not preallocate blank file data volumes\n1914608 - Obsolete CPU models (kubevirt-cpu-plugin-configmap) are set on worker nodes\n1914947 - HPP golden images - DV shoudld not be created with WaitForFirstConsumer\n1917908 - [VMIO] vmimport pod fail to create when using ceph-rbd/block\n1917963 - [CNV 2.6] Unable to install CNV disconnected - requires kvm-info-nfd-plugin which is not mirrored\n1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration\n1920576 - HCO can report ready=true when it failed to create a CR for a component operator\n1920610 - e2e-aws-4.7-cnv consistently failing on Hyperconverged Cluster Operator\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923979 - kubernetes-nmstate: nmstate-handler pod crashes when configuring bridge device using ip tool\n1927373 - NoExecute taint violates pdb; VMIs are not live migrated\n1931376 - VMs disconnected from nmstate-defined bridge after CNV-2.5.4-\u003eCNV-2.6.0 upgrade\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: webkitgtk4 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:4035-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:4035\nIssue date:        2020-09-29\nCVE Names:         CVE-2019-6237 CVE-2019-6251 CVE-2019-8506\n                   CVE-2019-8524 CVE-2019-8535 CVE-2019-8536\n                   CVE-2019-8544 CVE-2019-8551 CVE-2019-8558\n                   CVE-2019-8559 CVE-2019-8563 CVE-2019-8571\n                   CVE-2019-8583 CVE-2019-8584 CVE-2019-8586\n                   CVE-2019-8587 CVE-2019-8594 CVE-2019-8595\n                   CVE-2019-8596 CVE-2019-8597 CVE-2019-8601\n                   CVE-2019-8607 CVE-2019-8608 CVE-2019-8609\n                   CVE-2019-8610 CVE-2019-8611 CVE-2019-8615\n                   CVE-2019-8619 CVE-2019-8622 CVE-2019-8623\n                   CVE-2019-8625 CVE-2019-8644 CVE-2019-8649\n                   CVE-2019-8658 CVE-2019-8666 CVE-2019-8669\n                   CVE-2019-8671 CVE-2019-8672 CVE-2019-8673\n                   CVE-2019-8674 CVE-2019-8676 CVE-2019-8677\n                   CVE-2019-8678 CVE-2019-8679 CVE-2019-8680\n                   CVE-2019-8681 CVE-2019-8683 CVE-2019-8684\n                   CVE-2019-8686 CVE-2019-8687 CVE-2019-8688\n                   CVE-2019-8689 CVE-2019-8690 CVE-2019-8707\n                   CVE-2019-8710 CVE-2019-8719 CVE-2019-8720\n                   CVE-2019-8726 CVE-2019-8733 CVE-2019-8735\n                   CVE-2019-8743 CVE-2019-8763 CVE-2019-8764\n                   CVE-2019-8765 CVE-2019-8766 CVE-2019-8768\n                   CVE-2019-8769 CVE-2019-8771 CVE-2019-8782\n                   CVE-2019-8783 CVE-2019-8808 CVE-2019-8811\n                   CVE-2019-8812 CVE-2019-8813 CVE-2019-8814\n               
    CVE-2019-8815 CVE-2019-8816 CVE-2019-8819\n                   CVE-2019-8820 CVE-2019-8821 CVE-2019-8822\n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844\n                   CVE-2019-8846 CVE-2019-11070 CVE-2020-3862\n                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867\n                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894\n                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899\n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902\n                   CVE-2020-10018 CVE-2020-11793\n====================================================================\n1. Summary:\n\nAn update for webkitgtk4 is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - noarch\n\n3. Description:\n\nWebKitGTK+ is port of the WebKit portable web rendering engine to the GTK+\nplatform. These packages provide WebKitGTK+ for GTK+ 3. \n\nThe following packages have been upgraded to a later upstream version:\nwebkitgtk4 (2.28.2). 
(BZ#1817144)\n\nSecurity Fix(es):\n\n* webkitgtk: Multiple security issues (CVE-2019-6237, CVE-2019-6251,\nCVE-2019-8506, CVE-2019-8524, CVE-2019-8535, CVE-2019-8536, CVE-2019-8544,\nCVE-2019-8551, CVE-2019-8558, CVE-2019-8559, CVE-2019-8563, CVE-2019-8571,\nCVE-2019-8583, CVE-2019-8584, CVE-2019-8586, CVE-2019-8587, CVE-2019-8594,\nCVE-2019-8595, CVE-2019-8596, CVE-2019-8597, CVE-2019-8601, CVE-2019-8607,\nCVE-2019-8608, CVE-2019-8609, CVE-2019-8610, CVE-2019-8611, CVE-2019-8615,\nCVE-2019-8619, CVE-2019-8622, CVE-2019-8623, CVE-2019-8625, CVE-2019-8644,\nCVE-2019-8649, CVE-2019-8658, CVE-2019-8666, CVE-2019-8669, CVE-2019-8671,\nCVE-2019-8672, CVE-2019-8673, CVE-2019-8674, CVE-2019-8676, CVE-2019-8677,\nCVE-2019-8678, CVE-2019-8679, CVE-2019-8680, CVE-2019-8681, CVE-2019-8683,\nCVE-2019-8684, CVE-2019-8686, CVE-2019-8687, CVE-2019-8688, CVE-2019-8689,\nCVE-2019-8690, CVE-2019-8707, CVE-2019-8710, CVE-2019-8719, CVE-2019-8720,\nCVE-2019-8726, CVE-2019-8733, CVE-2019-8735, CVE-2019-8743, CVE-2019-8763,\nCVE-2019-8764, CVE-2019-8765, CVE-2019-8766, CVE-2019-8768, CVE-2019-8769,\nCVE-2019-8771, CVE-2019-8782, CVE-2019-8783, CVE-2019-8808, CVE-2019-8811,\nCVE-2019-8812, CVE-2019-8813, CVE-2019-8814, CVE-2019-8815, CVE-2019-8816,\nCVE-2019-8819, CVE-2019-8820, CVE-2019-8821, CVE-2019-8822, CVE-2019-8823,\nCVE-2019-8835, CVE-2019-8844, CVE-2019-8846, CVE-2019-11070, CVE-2020-3862,\nCVE-2020-3864, CVE-2020-3865, CVE-2020-3867, CVE-2020-3868, CVE-2020-3885,\nCVE-2020-3894, CVE-2020-3895, CVE-2020-3897, CVE-2020-3899, CVE-2020-3900,\nCVE-2020-3901, CVE-2020-3902, CVE-2020-10018, CVE-2020-11793)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nppc64:\nwebkitgtk4-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm\n\nppc64le:\nwebkitgtk4-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm\n\ns390x:\nwebkitgtk4-2.28.2-2.el7.s390.rpm\nwebkitgtk4-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390x.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nppc64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm\n\ns390x:\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-6237\nhttps://access.redhat.com/security/cve/CVE-2019-6251\nhttps://access.redhat.com/security/cve/CVE-2019-8506\nhttps://access.redhat.com/security/cve/CVE-2019-8524\nhttps://access.redhat.com/security/cve/CVE-2019-8535\nhttps://access.redhat.com/security/cve/CVE-2019-8536\nhttps://access.redhat.com/security/cve/CVE-2019-8544\nhttps://access.redhat.com/security/cve/CVE-2019-8551\nhttps://access.redhat.com/security/cve/CVE-2019-8558\nhttps://access.redhat.com/security/cve/CVE-2019-8559\nhttps://access.redhat.com/security/cve/CVE-2019-8563\nhttps://access.redhat.com/security/cve/CVE-2019-8571\nhttps://access.redhat.com/security/cve/CVE-2019-8583\nhttps://access.redhat.com/security/cve/CVE-2019-8584\nhttps://access.redhat.com/security/cve/CVE-2019-8586\nhttps://access.redhat.com/security/cve/CVE-2019-8587\nhttps://access.redhat.com/security/cve/CVE-2019-8594\nhttps://access.redhat.com/security/cve/CVE-2019-8595\nhttps://access.redhat.com/security/cve/CVE-2019-8596\nhttps://access.redhat.com/security/cve/CVE-2019-8597\nhttps://access.redhat.com/security/cve/CVE-2019-8601\nhttps://access.redhat.com/security/cve/CVE-2019-8607\nhttps://access.redhat.com/security/cve/CVE-2019-8608\nhttps://access.redhat.com/security/cve/CVE-2019-8609\nhttps://access.redhat.com/security/cve/CVE-2019-8610\n
https://access.redhat.com/security/cve/CVE-2019-8611\nhttps://access.redhat.com/security/cve/CVE-2019-8615\nhttps://access.redhat.com/security/cve/CVE-2019-8619\nhttps://access.redhat.com/security/cve/CVE-2019-8622\nhttps://access.redhat.com/security/cve/CVE-2019-8623\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8644\nhttps://access.redhat.com/security/cve/CVE-2019-8649\nhttps://access.redhat.com/security/cve/CVE-2019-8658\nhttps://access.redhat.com/security/cve/CVE-2019-8666\nhttps://access.redhat.com/security/cve/CVE-2019-8669\nhttps://access.redhat.com/security/cve/CVE-2019-8671\nhttps://access.redhat.com/security/cve/CVE-2019-8672\nhttps://access.redhat.com/security/cve/CVE-2019-8673\nhttps://access.redhat.com/security/cve/CVE-2019-8674\nhttps://access.redhat.com/security/cve/CVE-2019-8676\nhttps://access.redhat.com/security/cve/CVE-2019-8677\nhttps://access.redhat.com/security/cve/CVE-2019-8678\nhttps://access.redhat.com/security/cve/CVE-2019-8679\nhttps://access.redhat.com/security/cve/CVE-2019-8680\nhttps://access.redhat.com/security/cve/CVE-2019-8681\nhttps://access.redhat.com/security/cve/CVE-2019-8683\nhttps://access.redhat.com/security/cve/CVE-2019-8684\nhttps://access.redhat.com/security/cve/CVE-2019-8686\nhttps://access.redhat.com/security/cve/CVE-2019-8687\nhttps://access.redhat.com/security/cve/CVE-2019-8688\nhttps://access.redhat.com/security/cve/CVE-2019-8689\nhttps://access.redhat.com/security/cve/CVE-2019-8690\nhttps://access.redhat.com/security/cve/CVE-2019-8707\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8719\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8726\nhttps://access.redhat.com/security/cve/CVE-2019-8733\nhttps://access.redhat.com/security/cve/CVE-2019-8735\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8763\nht
tps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8765\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8768\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8821\nhttps://access.redhat.com/security/cve/CVE-2019-8822\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-11070\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhtt
ps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2020 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBX3OjINzjgjWX9erEAQjqsg/9FnSEJ3umFx0gtnsZIVRP9YxMIVZhVQ8z\nrNnK/LGQWq1nPlNC5OF60WRcWA7cC74lh1jl/+xU6p+9JXTq9y9hQTd7Fcf+6T01\nRYj2zJe6kGBY/53rhZJKCdb9zNXz1CkqsuvTPqVGIabUWTTlsBFnd6l4GK6QL4kM\nXVQZyWtmSfmLII4Ocdav9WocJzH6o1TbEo+O9Fm6WjdVOK+/+VzPki0/dW50CQAK\nR8u5tTXZR5m52RLmvhs/LTv3yUnmhEkhvrR0TtuR8KRfcP1/ytNwn3VidFefuAO1\nPWrgpjIPWy/kbtZaZWK4fBblYj6bKCVD1SiBKQcOfCq0f16aqRP2niFoDXdAy467\neGu0JHkRsIRCLG2rY+JfOau5KtLRhRr0iRe5AhOVpAtUelzjAvEQEcVv4GmZXcwX\nrXfeagSjWzdo8Mf55d7pjORXAKhGdO3FQSeiCvzq9miZq3NBX4Jm4raobeskw/rJ\n1ONqg4fE7Gv7rks8QOy5xErwI8Ut1TGJAgYOD8rmRptr05hBWQFJCfmoc4KpxsMe\nPJoRag0AZfYxYoMe5avMcGCYHosU63z3wS7gao9flj37NkEi6M134vGmCpPNmpGr\nw5HQly9SO3mD0a92xOUn42rrXq841ZkVu89fR6j9wBn8NAKLWH6eUjZkVMNmLRzh\nPKg+HFNkMjk=dS3G\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://www.redhat.com/mailman/listinfo/rhsa-announce\n. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1823765 - nfd-workers crash under an ipv6 environment\n1838802 - mysql8 connector from operatorhub does not work with metering operator\n1838845 - Metering operator can\u0027t connect to postgres DB from Operator Hub\n1841883 - namespace-persistentvolumeclaim-usage  query returns unexpected values\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1868294 - NFD operator does not allow customisation of nfd-worker.conf\n1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration\n1890672 - NFD is missing a build flag to build correctly\n1890741 - path to the CA trust bundle ConfigMap is broken in report operator\n1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster\n1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel\n1900125 - FIPS error while generating RSA private key for CA\n1906129 - OCP 4.7:  Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub\n1908492 - OCP 4.7:  Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub\n1913837 - The CI and ART 4.7 metering images are not mirrored\n1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le\n1916010 - olm skip range is set to the wrong range\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923998 - NFD Operator is failing to update and remains in Replacing state\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2019-10-29-8 Additional information\nfor APPLE-SA-2019-9-26-5 watchOS 6\n\nwatchOS 6 addresses the following:\n\nAudio\nAvailable for: Apple Watch Series 3 and later\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2019-8753: \u0141ukasz Pilorz of Standard Chartered GBS Poland\nEntry added October 29, 2019\n\nCoreAudio\nAvailable for: Apple Watch Series 3 and later\nImpact: Processing a maliciously crafted movie may result in the\ndisclosure of process memory\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2019-8705: riusksk of VulWar Corp working with Trend Micro\u0027s Zero\nDay Initiative\nEntry added October 29, 2019\n\nCoreCrypto\nAvailable for: Apple Watch Series 3 and later\nImpact: Processing a large input may lead to a denial of service\nDescription: A denial of service issue was addressed with improved\ninput validation. \nCVE-2019-8741: Nicky Mouha of NIST\nEntry added October 29, 2019\n\nFoundation\nAvailable for: Apple Watch Series 3 and later\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2019-8641: Samuel Gro\u00df and Natalie Silvanovich of Google Project\nZero\nCVE-2019-8746: Natalie Silvanovich and Samuel Gro\u00df of Google Project\nZero\nEntry added October 29, 2019\n\nIOUSBDeviceFamily\nAvailable for: Apple Watch Series 3 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2019-8718: Joshua Hill and Sem Voigtl\u00e4nder\nEntry added October 29, 2019\n\nKernel\nAvailable for: Apple Watch Series 3 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption vulnerability was addressed with\nimproved locking. \nCVE-2019-8740: Mohamed Ghannam (@_simo36)\nEntry added October 29, 2019\n\nKernel\nAvailable for: Apple Watch Series 3 and later\nImpact: A local app may be able to read a persistent account\nidentifier\nDescription: A validation issue was addressed with improved logic. \nCVE-2019-8809: Apple\nEntry added October 29, 2019\n\nKernel\nAvailable for: Apple Watch Series 3 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8717: Jann Horn of Google Project Zero\nEntry added October 29, 2019\n\nKernel\nAvailable for: Apple Watch Series 3 and later\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8712: Mohamed Ghannam (@_simo36)\nEntry added October 29, 2019\n\nKernel\nAvailable for: Apple Watch Series 3 and later\nImpact: A malicious application may be able to determine kernel\nmemory layout\nDescription: A memory corruption issue existed in the handling of\nIPv6 packets. \nCVE-2019-8744: Zhuo Liang of Qihoo 360 Vulcan Team\nEntry added October 29, 2019\n\nKernel\nAvailable for: Apple Watch Series 3 and later\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2019-8749: found by OSS-Fuzz\nCVE-2019-8756: found by OSS-Fuzz\nEntry added October 29, 2019\n\nmDNSResponder\nAvailable for: Apple Watch Series 3 and later\nImpact: An attacker in physical proximity may be able to passively\nobserve device names in AWDL communications\nDescription: This issue was resolved by replacing device names with a\nrandom identifier. \nCVE-2019-8799: David Kreitschmann and Milan Stute of Secure Mobile\nNetworking Lab at Technische Universit\u00e4t Darmstadt\nEntry added October 29, 2019\n\nUIFoundation\nAvailable for: Apple Watch Series 3 and later\nImpact: Processing a maliciously crafted text file may lead to\narbitrary code execution\nDescription: A buffer overflow was addressed with improved bounds\nchecking. \nCVE-2019-8710: found by OSS-Fuzz\nCVE-2019-8728: Junho Jang of LINE Security Team and Hanul Choi of\nABLY Corporation\nCVE-2019-8734: found by OSS-Fuzz\nCVE-2019-8751: Dongzhuo Zhao working with ADLab of Venustech\nCVE-2019-8752: Dongzhuo Zhao working with ADLab of Venustech\nCVE-2019-8773: found by OSS-Fuzz\n\nAdditional recognition\n\nAudio\nWe would like to acknowledge riusksk of VulWar Corp working with\nTrend Micro\u0027s Zero Day Initiative for their assistance. \nEntry added October 29, 2019\n\nboringssl\nWe would like to acknowledge Thijs Alkemade (@xnyhps) of Computest\nfor their assistance. \n\nHomeKit\nWe would like to acknowledge Tian Zhang for their assistance. \n\nKernel\nWe would like to acknowledge Brandon Azad of Google Project Zero for\ntheir assistance. \n\nmDNSResponder\nWe would like to acknowledge Gregor Lang of e.solutions GmbH for\ntheir assistance. \n\nProfiles\nWe would like to acknowledge Erik Johnson of Vernon Hills High School\nand James Seeley (@Code4iOS) of Shriver Job Corps for their\nassistance. \n\nSafari\nWe would like to acknowledge Yi\u011fit Can YILMAZ (@yilmazcanyigit) of\nTurkishKit for their assistance. 
\n\nWebKit\nWe would like to acknowledge MinJeong Kim of Information Security\nLab, Chungnam National University, JaeCheol Ryou of the Information\nSecurity Lab, Chungnam National University in South Korea and cc\nworking with Trend Micro\u0027s Zero Day Initiative for their assistance. \n\nInstallation note:\n\nInstructions on how to update your Apple Watch software are\navailable at https://support.apple.com/kb/HT204641\n\nTo check the version on your Apple Watch, open the Apple Watch app\non your iPhone and select \"My Watch \u003e General \u003e About\". \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\"",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8710"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "155064"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-8710",
        "trust": 3.3
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96749516",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "155216",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4456",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4233",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155068",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.2
      },
      {
        "db": "VULHUB",
        "id": "VHN-160145",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8710",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159375",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "155064",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "155064"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "id": "VAR-201912-1864",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160145"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:36:34.526000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "About the security content of iCloud for Windows 11.0",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210727"
      },
      {
        "title": "About the security content of iCloud for Windows 7.15",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210728"
      },
      {
        "title": "About the security content of iOS 13.2 and iPadOS 13.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210721"
      },
      {
        "title": "About the security content of Xcode 11.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210729"
      },
      {
        "title": "About the security content of macOS Catalina 10.15.1, Security Update 2019-001, and Security Update 2019-006",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210722"
      },
      {
        "title": "About the security content of tvOS 13.2",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210723"
      },
      {
        "title": "About the security content of watchOS 6.1",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210724"
      },
      {
        "title": "About the security content of Safari 13.0.3",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210725"
      },
      {
        "title": "About the security content of iTunes 12.10.2 for Windows",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT210726"
      },
      {
        "title": "Mac \u306b\u642d\u8f09\u3055\u308c\u3066\u3044\u308b macOS \u3092\u8abf\u3079\u308b",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT201260"
      },
      {
        "title": "Multiple Apple product WebKit Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=112183"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210727"
      },
      {
        "trust": 1.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8788"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8803"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8815"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8766"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8735"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8789"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8804"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8816"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8775"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8793"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8805"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8710"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8819"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8782"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8794"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8807"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8743"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8820"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8783"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8795"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8811"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8747"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8821"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8784"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8797"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8812"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8750"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8822"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8785"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8798"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8813"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8764"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8823"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8786"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8802"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8814"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8765"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2019-8787"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96749516/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8750"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8785"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8797"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8786"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8798"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8787"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8802"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8788"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8803"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8804"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8775"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8789"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8805"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8793"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8807"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8794"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8747"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8784"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8795"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.6,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.6,
        "url": "https://wpewebkit.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://webkitgtk.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20193044-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161536/red-hat-security-advisory-2020-5635-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168011/red-hat-security-advisory-2022-5924-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159816/red-hat-security-advisory-2020-4451-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155216/webkitgtk-wpe-webkit-code-execution-xss.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210635"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156742/gentoo-linux-security-advisory-202003-22.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4233/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/161429/red-hat-security-advisory-2021-0436-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166279/red-hat-security-advisory-2022-0056-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210727"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-multiple-vulnerabilities-30975"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4456/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155068/apple-security-advisory-2019-10-29-11.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/160889/red-hat-security-advisory-2021-0050-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.4,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.2,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/bugtraq/2019/nov/12"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8688"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8678"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8669"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8644"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8680"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8753"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8706"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8773"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8717"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8728"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8744"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8734"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8718"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8746"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8745"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8641"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8752"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8751"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8756"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8709"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8705"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8741"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8740"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8799"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "155064"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "155064"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2020-09-30T15:47:21",
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2019-11-01T17:09:58",
        "db": "PACKETSTORM",
        "id": "155064"
      },
      {
        "date": "2019-11-01T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      },
      {
        "date": "2019-11-01T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "date": "2019-12-18T18:15:36.257000",
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160145"
      },
      {
        "date": "2021-11-30T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8710"
      },
      {
        "date": "2022-08-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      },
      {
        "date": "2020-01-07T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      },
      {
        "date": "2024-11-21T04:50:20.487000",
        "db": "NVD",
        "id": "CVE-2019-8710"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "plural  Apple Updates to product vulnerabilities",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2019-011304"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201911-017"
      }
    ],
    "trust": 0.6
  }
}

VAR-201912-1853

Vulnerability from variot - Updated: 2025-12-22 22:35

Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. WebKit is one of the web browser engine components. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; Windows-based iCloud prior to 11.0; Windows-based iTunes prior to 12.10.2. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601) An out-of-bounds read was addressed with improved input validation. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. (CVE-2019-8689) A logic issue existed in the handling of document loads. (CVE-2019-8719) This fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. (CVE-2019-8766) "Clear History and Website Data" did not clear the history. 
The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768) An issue existed in the drawing of web page elements. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769) This issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846) WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contain a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018) A use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885) A race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901) An input validation issue was addressed with improved input validation. (CVE-2020-3902).

Solution:

Download the release images via:

quay.io/redhat/quay:v3.3.3
quay.io/redhat/clair-jwt:v3.3.3
quay.io/redhat/quay-builder:v3.3.3
quay.io/redhat/clair:v3.3.3
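
The four images above can be fetched in a single loop. A minimal sketch, written as a dry run that only prints the commands (it assumes podman is installed and that you are logged in to quay.io; remove the leading `echo` to actually pull):

```shell
# Dry-run sketch: print the pull command for each Quay v3.3.3 image
# listed in this advisory. Drop the "echo" to perform the pulls
# (requires podman and access to the quay.io/redhat registry).
for image in quay clair-jwt quay-builder clair; do
  echo "podman pull quay.io/redhat/${image}:v3.3.3"
done
```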

  4. Bugs fixed (https://bugzilla.redhat.com/):

1905758 - CVE-2020-27831 quay: email notifications authorization bypass
1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display

  5. JIRA issues fixed (https://issues.jboss.org/):

PROJQUAY-1124 - NVD feed is broken for latest Clair v2 version

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:5633-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:5633
Issue date:        2021-02-24
CVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464
CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470
CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227
CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452
CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625
CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769
CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812
CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820
CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458
CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889
CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167
CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546
CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062
CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332
CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602
CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218
CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807
CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716
CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752
CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
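
When mirroring releases for disconnected installs, the release image is commonly referenced by one of the digests above (registry/repo@sha256:&lt;digest&gt;) rather than by tag, so the content is pinned exactly. A minimal sketch of validating such a pullspec before passing it to oc (an illustrative helper, not part of this advisory or of oc itself):

```python
import re

# Matches a digest-pinned pullspec: registry/repo@sha256:<64 lowercase hex chars>.
DIGEST_RE = re.compile(r"^[\w.\-/:]+@sha256:[0-9a-f]{64}$")

def is_digest_pinned(pullspec: str) -> bool:
    """Return True if the pullspec is pinned by a sha256 digest, False for tags."""
    return DIGEST_RE.match(pullspec) is not None

# Digest-pinned x86_64 release image from this advisory:
print(is_digest_pinned(
    "quay.io/openshift-release-dev/ocp-release@sha256:"
    "d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70"))  # True
# Tag-based references are mutable and are rejected:
print(is_digest_pinned("quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64"))  # False
```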

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.
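
The actual update is driven by the OpenShift Console or `oc adm upgrade` against an update channel; as a rough sketch of the version comparison involved (a hypothetical helper, not an OpenShift tool), a cluster needs this erratum whenever its current release sorts below 4.7.0:

```python
def parse_version(v: str) -> tuple:
    """Split a dotted release string like '4.6.16' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def needs_update(current: str, fixed: str = "4.7.0") -> bool:
    """True if `current` is older than the release shipped in this advisory."""
    return parse_version(current) < parse_version(fixed)

print(needs_update("4.6.16"))  # True: still on a 4.6.z release
print(needs_update("4.7.0"))   # False: already at the fixed release
```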

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)
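
Several of the fixes above are bounds-checking failures on attacker-controlled input; CVE-2021-3121, for example, stemmed from generated gogo/protobuf unmarshal code using message offsets without index validation. A sketch of the defensive check involved (illustrative only; the function name and shape are hypothetical, not the actual gogo/protobuf code):

```python
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    """Return buf[offset:offset+length], rejecting out-of-bounds or negative
    values instead of silently slicing attacker-controlled ranges."""
    if offset < 0 or length < 0 or offset + length > len(buf):
        raise ValueError("invalid field bounds")
    return buf[offset:offset + length]

# Well-formed read: skip the 1-byte tag, take the 2-byte varint payload.
print(read_field(b"\x08\x96\x01", 1, 2))  # b'\x96\x01'
```

Without the guard, a crafted negative offset or oversized length is exactly the kind of input that turns a deserializer into a memory-safety or denial-of-service issue.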

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

  4. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state 1752220 - [OVN] Network Policy fails to work when project label gets overwritten 1756096 - Local storage operator should implement must-gather spec 1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 1768255 - installer reports 100% complete but failing components 1770017 - Init containers restart when the exited container is removed from node. 1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating 1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset 1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale 1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands 1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions 1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved" 1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor 1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image 1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration 1806000 - CRI-O failing with: error reserving ctr name 1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1810438 - Installation logs are not gathered from OCP nodes 1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist 1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation 1813012 - EtcdDiscoveryDomain no longer needed 1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints 1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use 1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist 1819457 - Package Server is in 'Cannot update' status despite properly working 1820141 - [RFE] deploy qemu-quest-agent on the nodes 1822744 - OCS Installation CI test flaking 1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario 1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool 1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file 1829723 - User workload monitoring alerts fire out of the box 1832968 - oc adm catalog mirror does not mirror the index image itself 1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN 1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters 1834995 - olmFull suite always fails once th suite is run on the same 
cluster 1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz 1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4 1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks 1838751 - [oVirt][Tracker] Re-enable skipped network tests 1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups 1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed 1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP 1841119 - Get rid of config patches and pass flags directly to kcm 1841175 - When an Install Plan gets deleted, OLM does not create a new one 1841381 - Issue with memoryMB validation 1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option 1844727 - Etcd container leaves grep and lsof zombie processes 1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs 1847074 - Filter bar layout issues at some screen widths on search page 1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural 1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5 1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service 1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard 1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing 1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD 1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service 1853115 - the restriction of --cloud option should be shown in help text. 
1853116 - --to option does not work with --credentials-requests flag. 1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854567 - "Installed Operators" list showing "duplicated" entries during installation 1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present 1855351 - Inconsistent Installer reactions to Ctrl-C during user input process 1855408 - OVN cluster unstable after running minimal scale test 1856351 - Build page should show metrics for when the build ran, not the last 30 minutes 1856354 - New APIServices missing from OpenAPI definitions 1857446 - ARO/Azure: excessive pod memory allocation causes node lockup 1857877 - Operator upgrades can delete existing CSV before completion 1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed 1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created 1860136 - default ingress does not propagate annotations to route object on update 1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed" 1860518 - unable to stop a crio pod 1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller 1862430 - LSO: PV creation lock should not be acquired in a loop 1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
1862608 - Virtual media does not work on hosts using BIOS, only UEFI 1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network 1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff 1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt 1866043 - Configurable table column headers can be illegible 1866087 - Examining agones helm chart resources results in "Oh no!" 1866261 - Need to indicate the intentional behavior for Ansible in the create api help info 1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement 1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity 1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help 1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed 1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations 1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x 1866482 - Few errors are seen when oc adm must-gather is run 1866605 - No metadata.generation set for build and buildconfig objects 1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name 1866901 - Deployment strategy for BMO allows multiple pods to run at the same time 1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
1867165 - Cannot assign static address to baremetal install bootstrap vm 1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig 1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS 1867477 - HPA monitoring cpu utilization fails for deployments which have init containers 1867518 - [oc] oc should not print so many goroutines when ANY command fails 1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster 1867965 - OpenShift Console Deployment Edit overwrites deployment yaml 1868004 - opm index add appears to produce image with wrong registry server binary 1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table" 1868104 - Baremetal actuator should not delete Machine objects 1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead 1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters 1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node 1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running 1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation 1868765 - [vsphere][ci] could not reserve an IP address: no available addresses 1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster 1868976 - Prometheus error opening query log file on EBS backed PVC 1869293 - The configmap name looks confusing in aide-ds pod logs 1869606 - crio's failing to delete a network namespace 1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes 1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] 1870373 - Ingress Operator reports available 
when DNS fails to provision 1870467 - D/DC Part of Helm / Operator Backed should not have HPA 1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json 1870800 - [4.6] Managed Column not appearing on Pods Details page 1871170 - e2e tests are needed to validate the functionality of the etcdctl container 1872001 - EtcdDiscoveryDomain no longer needed 1872095 - content are expanded to the whole line when only one column in table on Resource Details page 1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console 1872128 - Can't run container with hostPort on ipv6 cluster 1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective 1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity 1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them 1872821 - [DOC] Typo in Ansible Operator Tutorial 1872907 - Fail to create CR from generated Helm Base Operator 1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page) 1873007 - [downstream] failed to read config when running the operator-sdk in the home path 1873030 - Subscriptions without any candidate operators should cause resolution to fail 1873043 - Bump to latest available 1.19.x k8s 1873114 - Nodes goes into NotReady state (VMware) 1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem 1873305 - Failed to power on /inspect node when using Redfish protocol 1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information 1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon 
in Developer Console ->Navigation 1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working 1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters 1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\"" 1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver 1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider 1874240 - [vsphere] unable to deprovision - Runtime error list attached objects 1874248 - Include validation for vcenter host in the install-config 1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6 1874583 - apiserver tries and fails to log an event when shutting down 1874584 - add retry for etcd errors in kube-apiserver 1874638 - Missing logging for nbctl daemon 1874736 - [downstream] no version info for the helm-operator 1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution 1874968 - Accessibility: The project selection drop down is a keyboard trap 1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users 1875516 - disabled scheduling is easy to miss in node page of OCP console 1875598 - machine status is Running for a master node which has been terminated from the console 1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. 
1876166 - need to be able to disable kube-apiserver connectivity checks 1876469 - Invalid doc link on yaml template schema description 1876701 - podCount specDescriptor change doesn't take effect on operand details page 1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt 1876935 - AWS volume snapshot is not deleted after the cluster is destroyed 1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted 1877105 - add redfish to enabled_bios_interfaces 1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted 1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown 1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices' 1877681 - Manually created PV can not be used 1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53 1877740 - RHCOS unable to get ip address during first boot 1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5 1877919 - panic in multus-admission-controller 1877924 - Cannot set BIOS config using Redfish with Dell iDracs 1878022 - Met imagestreamimport error when import the whole image repository 1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated 1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status 1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM 1878766 - CPU consumption on nodes is higher than the CPU count of the node. 1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image" 1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode 1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used 1878953 - RBAC error shows when normal user access pvc upload page 1878956 - oc api-resources does not include API version 1878972 - oc adm release mirror removes the architecture information 1879013 - [RFE]Improve CD-ROM interface selection 1879056 - UI should allow to change or unset the evictionStrategy 1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled 1879094 - RHCOS dhcp kernel parameters not working as expected 1879099 - Extra reboot during 4.5 -> 4.6 upgrade 1879244 - Error adding container to network "ipvlan-host-local": "master" field is required 1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder 1879282 - Update OLM references to point to the OLM's new doc site 1879283 - panic after nil pointer dereference in pkg/daemon/update.go 1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests 1879419 - [RFE]Improve boot source description for 'Container' and ‘URL’ 1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted. 1879565 - IPv6 installation fails on node-valid-hostname 1879777 - Overlapping, divergent openshift-machine-api namespace manifests 1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy 1879930 - Annotations shouldn't be removed during object reconciliation 1879976 - No other channel visible from console 1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc. 
1880148 - dns daemonset rolls out slowly in large clusters 1880161 - Actuator Update calls should have fixed retry time 1880259 - additional network + OVN network installation failed 1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed" 1880410 - Convert Pipeline Visualization node to SVG 1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn 1880443 - broken machine pool management on OpenStack 1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s. 1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation 1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables) 1880785 - CredentialsRequest missing description in oc explain 1880787 - No description for Provisioning CRD for oc explain 1880902 - need dnsPlocy set in crd ingresscontrollers 1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster 1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use 1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets 1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node 1881268 - Image uploading failed but wizard claim the source is available 1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration 1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup 1881881 - unable to specify target port manually resulting in application not reachable 1881898 - misalignment of sub-title in quick start headers 1882022 - [vsphere][ipi] directory path is incomplete, 
terraform can't find the cluster 1882057 - Not able to select access modes for snapshot and clone 1882140 - No description for spec.kubeletConfig 1882176 - Master recovery instructions don't handle IP change well 1882191 - Installation fails against external resources which lack DNS Subject Alternative Name 1882209 - [ BateMetal IPI ] local coredns resolution not working 1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version" 1882268 - [e2e][automation]Add Integration Test for Snapshots 1882361 - Retrieve and expose the latest report for the cluster 1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use 1882556 - git:// protocol in origin tests is not currently proxied 1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4 1882608 - Spot instance not getting created on AzureGovCloud 1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance 1882649 - IPI installer labels all images it uploads into glance as qcow2 1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic 1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page 1882660 - Operators in a namespace should be installed together when approve one 1882667 - [ovn] br-ex Link not found when scale up RHEL worker 1882723 - [vsphere]Suggested mimimum value for providerspec not working 1882730 - z systems not reporting correct core count in recording rule 1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully 1882781 - nameserver= option to dracut creates extra NM connection profile 1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined 1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1883388 - Bare 
Metal Hosts Details page doesn't show Mainitenance and Power On/Off status 1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace 1883425 - Gather top installplans and their count 1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2 1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error 1883560 - operator-registry image needs clean up in /tmp 1883563 - Creating duplicate namespace from create namespace modal breaks the UI 1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful" 1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate 1883660 - e2e-metal-ipi CI job consistently failing on 4.4 1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests 1883766 - [e2e][automation] Adjust tests for UI changes 1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations 1883773 - opm alpha bundle build fails on win10 home 1883790 - revert "force cert rotation every couple days for development" in 4.7 1883803 - node pull secret feature is not working as expected 1883836 - Jenkins imagestream ubi8 and nodejs12 update 1883847 - The UI does not show checkbox for enable encryption at rest for OCS 1883853 - go list -m all does not work 1883905 - race condition in opm index add --overwrite-latest 1883946 - Understand why trident CSI pods are getting deleted by OCP 1884035 - Pods are illegally transitioning back to pending 1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace 1884131 - oauth-proxy repository should run tests 1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied 1884221 - IO becomes 
unhealthy due to a file change 1884258 - Node network alerts should work on ratio rather than absolute values 1884270 - Git clone does not support SCP-style ssh locations 1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout 1884435 - vsphere - loopback is randomly not being added to resolver 1884565 - oauth-proxy crashes on invalid usage 1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy 1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users 1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment 1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu. 1884632 - Adding BYOK disk encryption through DES 1884654 - Utilization of a VMI is not populated 1884655 - KeyError on self._existing_vifs[port_id] 1884664 - Operator install page shows "installing..." instead of going to install status page 1884672 - Failed to inspect hardware. 
Reason: unable to start inspection: 'idrac' 1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure 1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps 1884739 - Node process segfaulted 1884824 - Update baremetal-operator libraries to k8s 1.19 1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping 1885138 - Wrong detection of pending state in VM details 1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2 1885165 - NoRunningOvnMaster alert falsely triggered 1885170 - Nil pointer when verifying images 1885173 - [e2e][automation] Add test for next run configuration feature 1885179 - oc image append fails on push (uploading a new layer) 1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig 1885218 - [e2e][automation] Add virtctl to gating script 1885223 - Sync with upstream (fix panicking cluster-capacity binary) 1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2 1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2 1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2 1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2 1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2 1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2 1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI 1885315 - unit tests fail on slow disks 1885319 - Remove redundant use of group and kind of DataVolumeTemplate 1885343 - Console doesn't load in iOS Safari when using self-signed certificates 1885344 - 4.7 upgrade - dummy bug for 1880591 1885358 - add p&f configuration to protect openshift traffic 1885365 - MCO does not respect the install section of systemd files when enabling 1885376 - failed to initialize the 
cluster: Cluster operator marketplace is still updating 1885398 - CSV with only Webhook conversion can't be installed 1885403 - Some OLM events hide the underlying errors 1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case 1885425 - opm index add cannot batch add multiple bundles that use skips 1885543 - node tuning operator builds and installs an unsigned RPM 1885644 - Panic output due to timeouts in openshift-apiserver 1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment 1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations 1885706 - Cypress: Fix 'link-name' accesibility violation 1885761 - DNS fails to resolve in some pods 1885856 - Missing registry v1 protocol usage metric on telemetry 1885864 - Stalld service crashed under the worker node 1885930 - [release 4.7] Collect ServiceAccount statistics 1885940 - kuryr/demo image ping not working 1886007 - upgrade test with service type load balancer will never work 1886022 - Move range allocations to CRD's 1886028 - [BM][IPI] Failed to delete node after scale down 1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas 1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd 1886154 - System roles are not present while trying to create new role binding through web console 1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm 1886168 - Remove Terminal Option for Windows Nodes 1886200 - greenwave / CVP is failing on bundle validations, cannot stage push 1886229 - Multipath support for RHCOS sysroot 1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage 1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status 1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL 
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with "no fc disk found" error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday][VMware to CNV VM import] VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels needs to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Dont prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses depricated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp lose sync openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: `make check` broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display.
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by it's name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove &apos; from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - `oc adm release mirror` panic panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service accout issuer
1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesnt set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports "failed to validate server configuration" err="unsupported log format"
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items 1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. 1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components 1903248 - Backport Upstream Static Pod UID patch 1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests] 1903290 - Kubelet repeatedly log the same log line from exited containers 1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. 1903382 - Panic when task-graph is canceled with a TaskNode with no tasks 1903400 - Migrate a VM which is not running goes to pending state 1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page 1903414 - NodePort is not working when configuring an egress IP address 1903424 - mapi_machine_phase_transition_seconds_sum doesn't work 1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum" 1903639 - Hostsubnet gatherer produces wrong output 1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service 1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started 1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image 1903717 - Handle different Pod selectors for metal3 Deployment 1903733 - Scale up followed by scale down can delete all running workers 1903917 - Failed to load "Developer Catalog" page 1903999 - Httplog response code is always zero 1904026 - The quota controllers should resync on new resources and make progress 1904064 - Automated cleaning is disabled by default 1904124 - DHCP to static lease script doesn't work correctly if starting with infinite 
leases 1904125 - Bootstrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap 1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails 1904133 - KubeletConfig flooded with failure conditions 1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart 1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi ! 1904244 - MissingKey errors for two plugins using i18next.t 1904262 - clusterresourceoverride-operator has version: 1.0.0 every build 1904296 - VPA-operator has version: 1.0.0 every build 1904297 - The index image generated by "opm index prune" leaves unrelated images 1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards 1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade 1904497 - vsphere-problem-detector: Run on vSphere cloud only 1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set 1904502 - vsphere-problem-detector: allow longer timeouts for some operations 1904503 - vsphere-problem-detector: emit alerts 1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody) 1904578 - metric scraping for vsphere problem detector is not configured 1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade 1904663 - IPI pointer customization MachineConfig always generated 1904679 - [Feature:ImageInfo] Image info should display information about images 1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image 1904684 - [sig-cli] oc debug ensure it works with image streams 1904713 - Helm charts with kubeVersion restriction are filtered incorrectly 1904776 - Snapshot modal alert is not pluralized 1904824 - Set vSphere hostname from guestinfo before NM starts 1904941 - Insights
status is always showing a loading icon 1904973 - KeyError: 'nodeName' on NP deletion 1904985 - Prometheus and thanos sidecar targets are down 1904993 - Many ampersand special characters are found in strings 1905066 - QE - Monitoring test cases - smoke test suite automation 1905074 - QE -Gherkin linter to maintain standards 1905100 - Too many haproxy processes in default-router pod causing high load average 1905104 - Snapshot modal disk items missing keys 1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm 1905119 - Race in AWS EBS determining whether custom CA bundle is used 1905128 - [e2e][automation] e2e tests succeed without actually execute 1905133 - operator conditions special-resource-operator 1905141 - vsphere-problem-detector: report metrics through telemetry 1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures 1905194 - Detecting broken connections to the Kube API takes up to 15 minutes 1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests 1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP 1905253 - Inaccurate text at bottom of Events page 1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905299 - OLM fails to update operator 1905307 - Provisioning CR is missing from must-gather 1905319 - cluster-samples-operator containers are not requesting required memory resource 1905320 - csi-snapshot-webhook is not requesting required memory resource 1905323 - dns-operator is not requesting required memory resource 1905324 - ingress-operator is not requesting required memory resource 1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory 1905328 - Changing the bound token service account issuer invalids previously issued bound tokens 1905329 - 
openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory 1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails 1905347 - QE - Design Gherkin Scenarios 1905348 - QE - Design Gherkin Scenarios 1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod 1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted 1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input 1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation 1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1 1905404 - The example of "Remove the entrypoint on the mysql:latest image" for oc image append does not work 1905416 - Hyperlink not working from Operator Description 1905430 - usbguard extension fails to install because of missing correct protobuf dependency version 1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads 1905502 - Test flake - unable to get https transport for ephemeral-registry 1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs 1905610 - Fix typo in export script 1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster 1905640 - Subscription manual approval test is flaky 1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry 1905696 - ClusterMoreUpdatesModal component did not get internationalized 1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes 1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project 1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster 1905792 - [OVN] Cannot create egressfirewall with dnsName 1905889 - Should create SA for each namespace that the operator scoped 1905920 - Quickstart exit and restart 1905941 - Page goes to error after create catalogsource 1905977 - QE gherkin design scenario-pipeline metrics ODC-3711 1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters 1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected 1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it 1906118 - OCS feature detection constantly polls storageclusters and storageclasses 1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource 1906121 - [oc] After new-project creation, the kubeconfig file does not set the project 1906134 - OLM should not create OperatorConditions for copied CSVs 1906143 - CBO supports log levels 1906186 - i18n: Translators are not able to translate this without context for alert manager config 1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots 1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion
to support volume resize. 1906276 - oc image append can't work with multi-arch image with --filter-by-os='.*' 1906318 - use proper term for Authorized SSH Keys 1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional 1906356 - Unify Clone PVC boot source flow with URL/Container boot source 1906397 - IPA has incorrect kernel command line arguments 1906441 - HorizontalNav and NavBar have invalid keys 1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log 1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project 1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them 1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures 1906511 - Root reprovisioning tests flaking often in CI 1906517 - Validation is not robust enough and may prevent to generate install-config. 1906518 - Update snapshot API CRDs to v1 1906519 - Update LSO CRDs to use v1 1906570 - Number of disruptions caused by reboots on a cluster cannot be measured 1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope 1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs 1906655 - [SDN] Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs 1906679 - quick start panel styles are not loaded 1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber 1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form 1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created 1906689 - user can pin to nav configmaps and secrets multiple times 1906691 - Add doc which describes disabling helm chart repository 1906713 - Quick
starts not accessible for a developer user 1906718 - helm chart "provided by Redhat" is misspelled 1906732 - Machine API proxy support should be tested 1906745 - Update Helm endpoints to use Helm 3.4.x 1906760 - performance issues with topology constantly re-rendering 1906766 - localized Autoscaled & Autoscaling pod texts overlap with the pod ring 1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section 1906769 - topology fails to load with non-kubeadmin user 1906770 - shortcuts on mobiles view occupies a lot of space 1906798 - Dev catalog customization doesn't update console-config ConfigMap 1906806 - Allow installing extra packages in ironic container images 1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer 1906835 - Topology view shows add page before then showing full project workloads 1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version 1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy 1906860 - Bump kube dependencies to v1.20 for Net Edge components 1906864 - Quick Starts Tour: Need to adjust vertical spacing 1906866 - Translations of Sample-Utils 1906871 - White screen when sort by name in monitoring alerts page 1906872 - Pipeline Tech Preview Badge Alignment 1906875 - Provide an option to force backup even when API is not available.
1906877 - 'Placeholder' value in search filter does not match column heading in Vulnerabilities 1906879 - Add missing i18n keys 1906880 - oidc discovery endpoint controller invalidates all TokenRequest API tokens during install 1906896 - No Alerts causes odd empty Table (Need no content message) 1906898 - Missing User RoleBindings in the Project Access Web UI 1906899 - Quick Start - Highlight Bounding Box Issue 1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1 1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers 1906935 - Delete resources when Provisioning CR is deleted 1906968 - Must-gather should support collecting kubernetes-nmstate resources 1906986 - Ensure failed pod adds are retried even if the pod object doesn't change 1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt 1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change 1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible. 1907269 - Tooltips data are different when checking stack or not checking stack for the same time 1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen 1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance 1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent 1907293 - Increase timeouts in e2e tests 1907295 - Gherkin script for improve management for helm 1907299 - Advanced Subscription Badge for KMS and Arbiter not present 1907303 - Align VM template list items by baseline 1907304 - Use PF styles for selected template card in VM Wizard 1907305 - Drop 'ISO' from CDROM boot source message 1907307 - Support and provider labels should be passed on between templates and sources 1907310 - Pin action should be renamed to favorite 1907312 - VM Template source popover is missing info about added date 1907313 - ClusterOperator objects cannot be overriden with cvo-overrides 1907328 - iproute-tc package is missing in ovn-kube image 1907329 - CLUSTER_PROFILE env. variable is not used by the CVO 1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached" 1907373 - Rebase to kube 1.20.0 1907375 - Bump to latest available 1.20.x k8s - workloads team 1907378 - Gather netnamespaces networking info 1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity 1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one 1907390 - prometheus-adapter: panic after k8s 1.20 bump 1907399 - build log icon link on topology nodes cause app to reload 1907407 - Buildah version not accessible 1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer" 1907453 - Dev Perspective -> running vm details -> resources -> no data 1907454 - Install PodConnectivityCheck CRD with CNO 1907459 - "The Boot source is also maintained by Red Hat." 
is always shown for all boot sources 1907475 - Unable to estimate the error rate of ingress across the connected fleet 1907480 - Active alerts section throwing forbidden error for users. 1907518 - Kamelets/Eventsource should be shown to user if they have create access 1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US 1907610 - Update kubernetes deps to 1.20 1907612 - Update kubernetes deps to 1.20 1907621 - openshift/installer: bump cluster-api-provider-kubevirt version 1907628 - Installer does not set primary subnet consistently 1907632 - Operator Registry should update its kubernetes dependencies to 1.20 1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters 1907644 - fix up handling of non-critical annotations on daemonsets/deployments 1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?) 1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication 1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail 1907767 - [e2e][automation] update test suite for kubevirt plugin 1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot 1907792 - The overrides of the OperatorCondition cannot block the operator upgrade 1907793 - Surface support info in VM template details 1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage 1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set 1907863 - Quickstarts status not updating when starting the tour 1907872 - dual stack with an ipv6 network fails on bootstrap phase 1907874 - QE - Design Gherkin Scenarios for epic ODC-5057 1907875 - No response when try to expand pvc with an invalid size 1907876 - Refactoring record package to make gatherer configurable 1907877 - QE - Automation- pipelines builder scripts 1907883
- Fix Pipeline creation without namespace issue 1907888 - Fix pipeline list page loader 1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form 1907892 - Unable to edit application deployed using "From Devfile" option 1907893 - navSortUtils.spec.ts unit test failure 1907896 - When a workload is added, Topology does not place the new items well 1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template 1907924 - Enable madvdontneed in OpenShift Images 1907929 - Enable madvdontneed in OpenShift System Components Part 2 1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot 1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context 1907948 - OCM-O bump to k8s 1.20 1907952 - bump to k8s 1.20 1907972 - Update OCM link to open Insights tab 1907989 - DataVolumes was introduced in common templates - VM creation fails in the UI 1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916 1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni 1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk 1908035 - dynamic-demo-plugin build does not generate dist directory 1908135 - quick search modal is not centered over topology 1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled 1908159 - [AWS C2S] MCO fails to sync cloud config 1908171 - GCP: Installation fails when installing cluster with custom type (n1-custom-4-16384) 1908180 - Add source for template is stuck in preparing pvc 1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens 1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN 1908277 - QE - Automation- pipelines actions scripts 1908280 - Documentation
describing ignore-volume-az is incorrect 1908296 - Fix pipeline builder form yaml switcher validation issue 1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI 1908323 - Create button missing for PLR in the search page 1908342 - The new pv_collector_total_pv_count is not reported via telemetry 1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name 1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots 1908349 - Volume snapshot tests are failing after 1.20 rebase 1908353 - QE - Automation- pipelines runs scripts 1908361 - bump to k8s 1.20 1908367 - QE - Automation- pipelines triggers scripts 1908370 - QE - Automation- pipelines secrets scripts 1908375 - QE - Automation- pipelines workspaces scripts 1908381 - Go Dependency Fixes for Devfile Lib 1908389 - Loadbalancer Sync failing on Azure 1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived 1908407 - Backport Upstream 95269 to fix potential crash in kubelet 1908410 - Exclude Yarn from VSCode search 1908425 - Create Role Binding form subject type and name are undefined when All Project is selected 1908431 - When the marketplace-operator pod gets restarted, the custom catalogsources are gone, as well as the pods 1908434 - Remove &apos from metal3-plugin internationalized strings 1908437 - Operator backed with no icon has no badge associated with the CSV tag 1908459 - bump to k8s 1.20 1908461 - Add bugzilla component to OWNERS file 1908462 - RHCOS 4.6 ostree removed dhclient 1908466 - CAPO AZ Screening/Validating 1908467 - Zoom in and zoom out in topology package should be sentence case 1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size 1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster 1908471 - OLM should bump k8s dependencies to 1.20 1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all
manifests 1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1908545 - VM clone dialog does not open 1908557 - [e2e][automation] Miss css id on bootsource and reviewcreate step on wizard 1908562 - Pod readiness is not being observed in real world cases 1908565 - [4.6] Cannot filter the platform/arch of the index image 1908573 - Align the style of flavor 1908583 - bootstrap does not run on additional networks if configured for master in install-config 1908596 - Race condition on operator installation 1908598 - Persistent Dashboard shows events for all provisioners 1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state 1908648 - Skip TestKernelType test on OKD, adjust TestExtensions 1908650 - The title of customize wizard is inconsistent 1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator 1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s] 1908687 - Option to save user settings separate when using local bridge (affects console developers only) 1908697 - Show kubectl diff command in the oc diff help page 1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom 1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds 1908717 - "missing unit character in duration" error in some network dashboards 1908746 - [Safari] Drop Shadow doesn't work as expected on hover on workload 1908747 - stale S3 CredentialsRequest in CCO manifest 1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase 1908830 - RHCOS 4.6 - Missing Initiatorname 1908868 - Update empty state message for EventSources and Channels tab 1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite
tolerations from tainted nodes 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1908888 - Dualstack does not work with multiple gateways 1908889 - Bump CNO to k8s 1.20 1908891 - TestDNSForwarding DNS operator e2e test is failing frequently 1908914 - CNO: upgrade nodes before masters 1908918 - Pipeline builder yaml view sidebar is not responsive 1908960 - QE - Design Gherkin Scenarios 1908971 - Gherkin Script for pipeline debt 4.7 1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated 1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console 1908998 - [cinder-csi-driver] doesn't detect the credentials change 1909004 - "No datapoints found" for RHEL node's filesystem graph 1909005 - i18n: workloads list view heading is not translated 1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects 1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type 1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware 1909067 - Web terminal should keep latest output when connection closes 1909070 - PLR and TR Logs component is not streaming as fast as tkn 1909092 - Error Message should not confuse user on Channel form 1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page 1909108 - Machine API components should use 1.20 dependencies 1909116 - Catalog Sort Items dropdown is not aligned on Firefox 1909198 - Move Sink action option is not working 1909207 - Accessibility Issue on monitoring page 1909236 - Remove pinned icon overlap on resource name 1909249 - Intermittent packet drop from pod to pod 1909276 - Accessibility Issue on create project modal 1909289 - oc debug of an init container no longer 
works 1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2 1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle 1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it 1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O 1909464 - Build operator-registry with golang-1.15 1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found 1909521 - Add kubevirt cluster type for e2e-test workflow 1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created 1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node 1909610 - Fix available capacity when no storage class selected 1909678 - scale up / down buttons available on pod details side panel 1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined 1909739 - Arbiter request data changes 1909744 - cluster-api-provider-openstack: Bump gophercloud 1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline 1909791 - Update standalone kube-proxy config for EndpointSlice 1909792 - Empty states for some details page subcomponents are not i18ned 1909815 - Perspective switcher is only half-i18ned 1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body 1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI 1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing 1909911 - [OVN]EgressFirewall caused a segfault 1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument 1909958 - Support Quick Start Highlights Properly 1909978 - 
ignore-volume-az = yes not working on standard storageClass 1909981 - Improve statement in template select step 1909992 - Fail to pull the bundle image when using the private index image 1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev 1910036 - QE - Design Gherkin Scenarios ODC-4504 1910049 - UPI: ansible-galaxy is not supported 1910127 - [UPI on oVirt]: Improve UPI Documentation 1910140 - fix the api dashboard with changes in upstream kube 1.20 1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable 1910165 - DHCP to static lease script doesn't handle multiple addresses 1910305 - [Descheduler] - The minKubeVersion should be 1.20.0 1910409 - Notification drawer is not localized for i18n 1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials 1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation 1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page 1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work 1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready 1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability 1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded 1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected" 1910753 - Support Directory Path to Devfile 1910805 - Missing translation for Pipeline status and breadcrumb text 1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer 1910840 - Show Nonexistent command info in the oc rollback -h help page 1910859 - breadcrumbs doesn't use last namespace 1910866 - Unify templates string 1910870 - Unify template
dropdown action 1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6 1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads" 1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard 1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration" 1911213 - Wrong and misleading warning for VMs that were created manually (not from template) 1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created 1911269 - waiting for the build message present when build exists 1911280 - Builder images are not detected for Dotnet, Httpd, NGINX 1911307 - Pod Scale-up requires extra privileges in OpenShift web-console 1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template 1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error 1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template 1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation 1911418 - [v2v] The target storage class name is not displayed if default storage class is used 1911434 - git ops empty state page displays icon with watermark 1911443 - SSH Certification field should be validated 1911465 - IOPS display wrong unit 1911474 - Devfile Application Group Does Not Delete Cleanly (errors) 1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController 1911574 - Expose volume mode on Upload Data form 1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined 1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel 1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command
output said 'Failed to run bundle'' 1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state 1911782 - Descheduler should not evict pod used local storage by the PVC 1911796 - uploading flow being displayed before submitting the form 1912066 - The ansible type operator's manager container is not stable when managing the CR 1912077 - helm operator's default rbac forbidden 1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory' 1912237 - Rebase CSI sidecars for 4.7 1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page 1912409 - Fix flow schema deployment 1912434 - Update guided tour modal title 1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken 1912523 - Standalone pod status not updating in topology graph 1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion 1912558 - TaskRun list and detail screen doesn't show Pending status 1912563 - p&f: carry 97206: clean up executing request on panic 1912565 - OLM macOS local build broken by moby/term dependency 1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion 1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff 1912590 - publicImageRepository not being populated 1912640 - Go operator's controller pods is forbidden 1912701 - Handle dual-stack configuration for NIC IP 1912703 - multiple queries can't be plotted in the same graph under some conditons 1912730 - Operator backed: In-context should support visual connector if SBO is not installed 1912828 - Align High Performance VMs with High Performance in RHV-UI 1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates 1912852 - VM from wizard - available VM templates - "storage" field is "0 B" 
1912888 - recycler template should be moved to KCM operator 1912907 - Helm chart repository index can contain unresolvable relative URL's 1912916 - Set external traffic policy to cluster for IBM platform 1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller 1912938 - Update confirmation modal for quick starts 1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment 1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment 1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver 1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver 1912977 - rebase upstream static-provisioner 1913006 - Remove etcd v2 specific alerts with etcd_http* metrics 1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip 1913037 - update static-provisioner base image 1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state 1913085 - Regression OLM uses scoped client for CRD installation 1913096 - backport: cadvisor machine metrics are missing in k8s 1.19 1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually 1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root 1913196 - Guided Tour doesn't handle resizing of browser 1913209 - Support modal should be shown for community supported templates 1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort 1913249 - update info alert this template is 
not editable 1913285 - VM list empty state should link to virtualization quick starts 1913289 - Rebase AWS EBS CSI driver for 4.7 1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled 1913297 - Remove restriction of taints for arbiter node 1913306 - unnecessary scroll bar is present on quick starts panel 1913325 - 1.20 rebase for openshift-apiserver 1913331 - Import from git: Fails to detect Java builder 1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used 1913343 - (release-4.7) Added changelog file for insights-operator 1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator 1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en." 1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads 1913420 - Time duration setting of resources is not being displayed 1913536 - 4.6.9 -> 4.7 upgrade hangs.
RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\\n\" 1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase 1913560 - Normal user cannot load template on the new wizard 1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user 1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table 1913568 - Normal user cannot create template 1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker 1913585 - Topology descriptive text fixes 1913608 - Table data contains data value None after change time range in graph and change back 1913651 - Improved Red Hat image and crashlooping OpenShift pod collection 1913660 - Change location and text of Pipeline edit flow alert 1913685 - OS field not disabled when creating a VM from a template 1913716 - Include additional use of existing libraries 1913725 - Refactor Insights Operator Plugin states 1913736 - Regression: fails to deploy computes when using root volumes 1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes 1913751 - add third-party network plugin test suite to openshift-tests 1913783 - QE-To fix the merging pr issue, commenting the afterEach() block 1913807 - Template support badge should not be shown for community supported templates 1913821 - Need definitive steps about uninstalling descheduler operator 1913851 - Cluster Tasks are not sorted in pipeline builder 1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists 1913951 - Update the Devfile Sample Repo to an Official Repo Host 1913960 - Cluster Autoscaler should use 1.20 dependencies 1913969 - Field dependency descriptor can sometimes cause an exception 1914060 - Disk created from 'Import via Registry' cannot be used as boot disk 1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy 1914090 - Grafana - The resulting 
dataset is too large to graph (OCS RBD volumes being counted as disks) 1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances 1914125 - Still using /dev/vde as default device path when create localvolume 1914183 - Empty NAD page is missing link to quickstarts 1914196 - target port in `from dockerfile` flow does nothing 1914204 - Creating VM from dev perspective may fail with template not found error 1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets 1914212 - [e2e][automation] Add test to validate bootable disk source 1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes 1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows 1914287 - Bring back selfLink 1914301 - User VM Template source should show the same provider as template itself 1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs 1914309 - /terminal page when WTO not installed shows nonsensical error 1914334 - order of getting started samples is arbitrary 1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x 1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI 1914405 - Quick search modal should be opened when coming back from a selection 1914407 - Its not clear that node-ca is running as non-root 1914427 - Count of pods on the dashboard is incorrect 1914439 - Typo in SRIOV port create command example 1914451 - cluster-storage-operator pod running as root 1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true 1914642 - Customize Wizard Storage tab does not pass validation 1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling 1914793 - device names should not be translated 1914894 -
Warn about using non-groupified api version 1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug 1914932 - Put correct resource name in relatedObjects 1914938 - PVC disk is not shown on customization wizard general tab 1914941 - VM Template rootdisk is not deleted after fetching default disk bus 1914975 - Collect logs from openshift-sdn namespace 1915003 - No estimate of average node readiness during lifetime of a cluster 1915027 - fix MCS blocking iptables rules 1915041 - s3:ListMultipartUploadParts is relied on implicitly 1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons 1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours 1915085 - Pods created and rapidly terminated get stuck 1915114 - [aws-c2s] worker machines are not create during install 1915133 - Missing default pinned nav items in dev perspective 1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource 1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot 1915188 - Remove HostSubnet anonymization 1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment 1915217 - OKD payloads expect to be signed with production keys 1915220 - Remove dropdown workaround for user settings 1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure 1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod 1915277 - [e2e][automation]fix cdi upload form test 1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout 1915304 - Updating scheduling component builder & base images to be consistent with ART 1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node 1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from 
different connection 1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod 1915357 - Dev Catalog doesn't load anything if virtualization operator is installed 1915379 - New template wizard should require provider and make support input a dropdown type 1915408 - Failure in operator-registry kind e2e test 1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation 1915460 - Cluster name size might affect installations 1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance 1915540 - Silent 4.7 RHCOS install failure on ppc64le 1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI) 1915582 - p&f: carry upstream pr 97860 1915594 - [e2e][automation] Improve test for disk validation 1915617 - Bump bootimage for various fixes 1915624 - "Please fill in the following field: Template provider" blocks customize wizard 1915627 - Translate Guided Tour text. 1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error 1915647 - Intermittent White screen when the connector dragged to revision 1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased 1915654 - [e2e][automation] Add a verification for Afinity modal should hint "Matching node found" 1915661 - Can't run the 'oc adm prune' command in a pod 1915672 - Kuryr doesn't work with selfLink disabled. 
1915674 - Golden image PVC creation - storage size should be taken from the template 1915685 - Message for not supported template is not clear enough 1915760 - Need to increase timeout to wait rhel worker get ready 1915793 - quick starts panel syncs incorrectly across browser windows 1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster 1915818 - vsphere-problem-detector: use "_totals" in metrics 1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol 1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version 1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0 1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics 1915885 - Kuryr doesn't support workers running on multiple subnets 1915898 - TaskRun log output shows "undefined" in streaming 1915907 - test/cmd/builds.sh uses docker.io 1915912 - sig-storage-csi-snapshotter image not available 1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard 1915939 - Resizing the browser window removes Web Terminal Icon 1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] 1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7 1915962 - ROKS: manifest with machine health check fails to apply in 4.7 1915972 - Global configuration breadcrumbs do not work as expected 1915981 - Install ethtool and conntrack in container for debugging 1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception 1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups 1916021 - OLM enters infinite loop if Pending CSV replaces itself 1916056 - Need Visual Web 
Terminal metric enabled for OCP monitoring telemetry 1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations 1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk 1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration 1916145 - Explicitly set minimum versions of python libraries 1916164 - Update csi-driver-nfs builder & base images to be consistent with ART 1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7 1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third 1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2 1916379 - error metrics from vsphere-problem-detector should be gauge 1916382 - Can't create ext4 filesystems with Ignition 1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates 1916401 - Deleting an ingress controller with a bad DNS Record hangs 1916417 - [Kuryr] Must-gather does not have all Custom Resources information 1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image 1916454 - teach CCO about upgradeability from 4.6 to 4.7 1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation 1916502 - Boot disk mirroring fails with mdadm error 1916524 - Two rootdisk shows on storage step 1916580 - Default yaml is broken for VM and VM template 1916621 - oc adm node-logs examples are wrong 1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
1916692 - Possibly fails to destroy LB and thus cluster 1916711 - Update Kube dependencies in MCO to 1.20.0 1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6 1916764 - editing a workload with no application applied, will auto fill the app 1916834 - Pipeline Metrics - Text Updates 1916843 - collect logs from openshift-sdn-controller pod 1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed 1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually 1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited 1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together" 1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace 1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document 1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error 1917117 - Common templates - disks screen: invalid disk name 1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created 1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator 1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk 1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened 1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console 1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory 1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7 1917327 - annotations.message maybe wrong for NTOPodsNotReady alert 1917367 - Refactor periodic.go 1917371 - Add docs on how to use the built-in profiler 1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console 1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui 1917484 - [BM][IPI] Failed to scale down machineset 1917522 - Deprecate --filter-by-os in oc adm catalog mirror 1917537 - controllers continuously busy reconciling operator 1917551 - use min_over_time for vsphere prometheus alerts 1917585 - OLM Operator install page missing i18n 1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types 1917605 - Deleting an exgw causes pods to no longer route to other exgws 1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API 1917656 - Add to Project/application for eventSources from topology shows 404 1917658 - Show TP badge for sources powered by camel connectors in create flow 1917660 - Editing parallelism of job get error info 1917678 - Could not provision pv when no symlink and target found on rhel worker 1917679 - Hide double CTA in admin pipelineruns tab 1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exists to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather a list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP 1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026` 1918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connector's are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration (SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - 
OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese translate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information for `oc new-project --skip-config-write` is wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode` 1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective 
and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't come up after deploying with Vault details from UI 1921572 - For external source (i.e GitHub Source) form view as well shows yaml 1921580 - [e2e][automation] Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation]
Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 
1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources shows `undefined` in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprectaed templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image shoult not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 
https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T lmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H EmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8 4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4 mWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL ISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy Ae5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk 4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM uR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG krzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv RjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6 McvuEaxco7U= =sw8i -----END PGP SIGNATURE-----

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling. Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918990 - ComplianceSuite scans use quay content image for initContainer 1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present 1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules 1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

  1. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  1. Bugs fixed (https://bugzilla.redhat.com/):

1732329 - Virtual Machine is missing documentation of its properties in yaml editor 1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv 1791753 - [RFE] [SSP] Template validator should check validations in template's parent template 1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic 1848954 - KMP missing CA extensions in cabundle of mutatingwebhookconfiguration 1848956 - KMP requires downtime for CA stabilization during certificate rotation 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1853911 - VM with dot in network name fails to start with unclear message 1854098 - NodeNetworkState on workers doesn't have "status" key due to nmstate-handler pod failure to run "nmstatectl show" 1856347 - SR-IOV : Missing network name for sriov during vm setup 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1859235 - Common Templates - after upgrade there are 2 common templates per each os-workload-flavor combination 1860714 - No API information from oc explain 1860992 - CNV upgrade - users are not removed from privileged SecurityContextConstraints 1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem 1866593 - CDI is not handling vm disk clone 1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs 1868817 - Container-native Virtualization 2.6.0 Images 1873771 - Improve the VMCreationFailed error message caused by VM low memory 1874812 - SR-IOV: Guest Agent expose link-local ipv6 address for sometime and then remove it 1878499 - DV import doesn't recover from scratch space PVC deletion 1879108 - Inconsistent naming of "oc virt" command in help text 1881874 - openshift-cnv namespace is getting stuck if 
the user tries to delete it while CNV is running 1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message 1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used 1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, before the NodeNetworkConfigurationPolicy is applied 1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request.

------------------------------------------------------------------------
WebKitGTK and WPE WebKit Security Advisory                 WSA-2019-0006


Date reported           : November 08, 2019
Advisory ID             : WSA-2019-0006
WebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2019-0006.html
WPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2019-0006.html
CVE identifiers         : CVE-2019-8710, CVE-2019-8743, CVE-2019-8764,
                          CVE-2019-8765, CVE-2019-8766, CVE-2019-8782,
                          CVE-2019-8783, CVE-2019-8808, CVE-2019-8811,
                          CVE-2019-8812, CVE-2019-8813, CVE-2019-8814,
                          CVE-2019-8815, CVE-2019-8816, CVE-2019-8819,
                          CVE-2019-8820, CVE-2019-8821, CVE-2019-8822,
                          CVE-2019-8823.

Several vulnerabilities were discovered in WebKitGTK and WPE WebKit.

CVE-2019-8710 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to OSS-Fuzz.

CVE-2019-8743 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to zhunki from Codesafe Team of Legendsec at Qi'anxin Group.

CVE-2019-8764 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to Sergei Glazunov of Google Project Zero.

CVE-2019-8765 Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before 2.24.3. Credit to Samuel Groß of Google Project Zero.

CVE-2019-8766 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to OSS-Fuzz.

CVE-2019-8782 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to Cheolung Lee of LINE+ Security Team.

CVE-2019-8783 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Cheolung Lee of LINE+ Graylab Security Team.

CVE-2019-8808 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to OSS-Fuzz.

CVE-2019-8811 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Soyeon Park of SSLab at Georgia Tech.

CVE-2019-8812 Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before 2.26.2. Credit to an anonymous researcher.

CVE-2019-8813 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to an anonymous researcher.

CVE-2019-8814 Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before 2.26.2. Credit to Cheolung Lee of LINE+ Security Team.

CVE-2019-8815 Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before 2.26.0. Credit to Apple.

CVE-2019-8816 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Soyeon Park of SSLab at Georgia Tech.

CVE-2019-8819 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Cheolung Lee of LINE+ Security Team.

CVE-2019-8820 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Samuel Groß of Google Project Zero.

CVE-2019-8821 Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before 2.24.3. Credit to Sergei Glazunov of Google Project Zero.

CVE-2019-8822 Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before 2.24.3. Credit to Sergei Glazunov of Google Project Zero.

CVE-2019-8823 Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before 2.26.1. Credit to Sergei Glazunov of Google Project Zero.

We recommend updating to the latest stable versions of WebKitGTK and WPE WebKit. It is the best way to ensure that you are running safe versions of WebKit. Please check our websites for information about the latest stable releases.

Further information about WebKitGTK and WPE WebKit security advisories can be found at: https://webkitgtk.org/security.html or https://wpewebkit.org/security/.

The WebKitGTK and WPE WebKit team, November 08, 2019

. Bugs fixed (https://bugzilla.redhat.com/):

1823765 - nfd-workers crash under an ipv6 environment 1838802 - mysql8 connector from operatorhub does not work with metering operator 1838845 - Metering operator can't connect to postgres DB from Operator Hub 1841883 - namespace-persistentvolumeclaim-usage query returns unexpected values 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1868294 - NFD operator does not allow customisation of nfd-worker.conf 1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration 1890672 - NFD is missing a build flag to build correctly 1890741 - path to the CA trust bundle ConfigMap is broken in report operator 1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster 1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel 1900125 - FIPS error while generating RSA private key for CA 1906129 - OCP 4.7: Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub 1908492 - OCP 4.7: Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub 1913837 - The CI and ART 4.7 metering images are not mirrored 1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le 1916010 - olm skip range is set to the wrong range 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1923998 - NFD Operator is failing to update and remains in Replacing state


  1. Gentoo Linux Security Advisory GLSA 202003-22

                                       https://security.gentoo.org/

Severity: Normal
Title: WebkitGTK+: Multiple vulnerabilities
Date: March 15, 2020
Bugs: #699156, #706374, #709612
ID: 202003-22


Synopsis

Multiple vulnerabilities have been found in WebKitGTK+, the worst of which may lead to arbitrary code execution.

Background

WebKitGTK+ is a full-featured port of the WebKit rendering engine, suitable for projects requiring any kind of web integration, from hybrid HTML/CSS applications to full-fledged web browsers.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-libs/webkit-gtk        < 2.26.4                  >= 2.26.4

Description

Multiple vulnerabilities have been discovered in WebKitGTK+. Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.26.4"

References

[ 1 ] CVE-2019-8625 https://nvd.nist.gov/vuln/detail/CVE-2019-8625 [ 2 ] CVE-2019-8674 https://nvd.nist.gov/vuln/detail/CVE-2019-8674 [ 3 ] CVE-2019-8707 https://nvd.nist.gov/vuln/detail/CVE-2019-8707 [ 4 ] CVE-2019-8710 https://nvd.nist.gov/vuln/detail/CVE-2019-8710 [ 5 ] CVE-2019-8719 https://nvd.nist.gov/vuln/detail/CVE-2019-8719 [ 6 ] CVE-2019-8720 https://nvd.nist.gov/vuln/detail/CVE-2019-8720 [ 7 ] CVE-2019-8726 https://nvd.nist.gov/vuln/detail/CVE-2019-8726 [ 8 ] CVE-2019-8733 https://nvd.nist.gov/vuln/detail/CVE-2019-8733 [ 9 ] CVE-2019-8735 https://nvd.nist.gov/vuln/detail/CVE-2019-8735 [ 10 ] CVE-2019-8743 https://nvd.nist.gov/vuln/detail/CVE-2019-8743 [ 11 ] CVE-2019-8763 https://nvd.nist.gov/vuln/detail/CVE-2019-8763 [ 12 ] CVE-2019-8764 https://nvd.nist.gov/vuln/detail/CVE-2019-8764 [ 13 ] CVE-2019-8765 https://nvd.nist.gov/vuln/detail/CVE-2019-8765 [ 14 ] CVE-2019-8766 https://nvd.nist.gov/vuln/detail/CVE-2019-8766 [ 15 ] CVE-2019-8768 https://nvd.nist.gov/vuln/detail/CVE-2019-8768 [ 16 ] CVE-2019-8769 https://nvd.nist.gov/vuln/detail/CVE-2019-8769 [ 17 ] CVE-2019-8771 https://nvd.nist.gov/vuln/detail/CVE-2019-8771 [ 18 ] CVE-2019-8782 https://nvd.nist.gov/vuln/detail/CVE-2019-8782 [ 19 ] CVE-2019-8783 https://nvd.nist.gov/vuln/detail/CVE-2019-8783 [ 20 ] CVE-2019-8808 https://nvd.nist.gov/vuln/detail/CVE-2019-8808 [ 21 ] CVE-2019-8811 https://nvd.nist.gov/vuln/detail/CVE-2019-8811 [ 22 ] CVE-2019-8812 https://nvd.nist.gov/vuln/detail/CVE-2019-8812 [ 23 ] CVE-2019-8813 https://nvd.nist.gov/vuln/detail/CVE-2019-8813 [ 24 ] CVE-2019-8814 https://nvd.nist.gov/vuln/detail/CVE-2019-8814 [ 25 ] CVE-2019-8815 https://nvd.nist.gov/vuln/detail/CVE-2019-8815 [ 26 ] CVE-2019-8816 https://nvd.nist.gov/vuln/detail/CVE-2019-8816 [ 27 ] CVE-2019-8819 https://nvd.nist.gov/vuln/detail/CVE-2019-8819 [ 28 ] CVE-2019-8820 https://nvd.nist.gov/vuln/detail/CVE-2019-8820 [ 29 ] CVE-2019-8821 https://nvd.nist.gov/vuln/detail/CVE-2019-8821 [ 30 ] CVE-2019-8822 
https://nvd.nist.gov/vuln/detail/CVE-2019-8822 [ 31 ] CVE-2019-8823 https://nvd.nist.gov/vuln/detail/CVE-2019-8823 [ 32 ] CVE-2019-8835 https://nvd.nist.gov/vuln/detail/CVE-2019-8835 [ 33 ] CVE-2019-8844 https://nvd.nist.gov/vuln/detail/CVE-2019-8844 [ 34 ] CVE-2019-8846 https://nvd.nist.gov/vuln/detail/CVE-2019-8846 [ 35 ] CVE-2020-3862 https://nvd.nist.gov/vuln/detail/CVE-2020-3862 [ 36 ] CVE-2020-3864 https://nvd.nist.gov/vuln/detail/CVE-2020-3864 [ 37 ] CVE-2020-3865 https://nvd.nist.gov/vuln/detail/CVE-2020-3865 [ 38 ] CVE-2020-3867 https://nvd.nist.gov/vuln/detail/CVE-2020-3867 [ 39 ] CVE-2020-3868 https://nvd.nist.gov/vuln/detail/CVE-2020-3868

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202003-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2020 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2019-10-29-1 iOS 13.2 and iPadOS 13.2

iOS 13.2 and iPadOS 13.2 are now available and address the following:

Accounts Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: A remote attacker may be able to leak memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at Technische Universität Darmstadt
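Apple does not publish the patched code, but "an out-of-bounds read was addressed with improved input validation" names a well-known bug class: a parser trusts an attacker-supplied length and reads past the end of its buffer. A minimal illustrative sketch of the validation pattern (the function and field names are hypothetical, and Python is used only to show the check — the real bug occurs in memory-unsafe code):

```python
def read_field(buf: bytes, offset: int, declared_len: int) -> bytes:
    """Return a field from buf, validating the declared length first.

    In a memory-unsafe parser, trusting declared_len from the wire lets
    a crafted message read past the end of the buffer (an out-of-bounds
    read, leaking adjacent memory). The fix class is to validate the
    length against the real buffer size before reading.
    """
    if offset < 0 or declared_len < 0:
        raise ValueError("negative offset or length")
    if offset + declared_len > len(buf):
        raise ValueError("declared length exceeds buffer size")
    return buf[offset:offset + declared_len]
```

A crafted `declared_len` of 10 against a 3-byte buffer is rejected instead of being read.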

App Store Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: A local attacker may be able to login to the account of a previously logged in user without valid credentials. CVE-2019-8803: Kiyeon An, 차민규 (CHA Minkyu)

Associated Domains Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: Improper URL processing may lead to data exfiltration Description: An issue existed in the parsing of URLs. CVE-2019-8788: Juha Lindstedt of Pakastin, Mirko Tanania, Rauli Rikama of Zero Keyboard Ltd
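"An issue existed in the parsing of URLs" in associated-domain handling typically means a host was matched loosely (for example by substring), which an attacker-controlled domain can satisfy. A hedged sketch of the strict comparison that defeats this — the allow-list and function name are assumptions for illustration, not Apple's code:

```python
from urllib.parse import urlsplit

# Hypothetical allow-list of associated domains for the example.
ALLOWED_HOSTS = {"example.com", "www.example.com"}

def is_trusted(url: str) -> bool:
    """Accept only exact host matches over https.

    A loose check such as `"example.com" in url` is fooled by
    https://example.com.evil.test/ — the kind of URL-parsing flaw that
    can route data to an attacker's server (data exfiltration).
    """
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS
```

Parsing the URL first and comparing the extracted hostname exactly is the usual remediation for this bug class.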

Audio Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8785: Ian Beer of Google Project Zero CVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure

AVEVideoEncoder Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure

Books Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: Parsing a maliciously crafted iBooks file may lead to disclosure of user information Description: A validation issue existed in the handling of symlinks. CVE-2019-8789: Gertjan Franken of imec-DistriNet, KU Leuven
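The symlink validation issue above is an instance of a common pattern: a file inside a crafted package is a symlink pointing outside the extraction root, so opening it discloses unrelated user data. A sketch of the usual mitigation under that assumption (illustrative only; `safe_open_member` and its layout are hypothetical, not the iBooks implementation):

```python
import os

def safe_open_member(root: str, member: str):
    """Open a file inside `root`, refusing symlinks and path escapes.

    A crafted package could ship `member` as a symlink to, say,
    /etc/passwd; opening it blindly would disclose data outside the
    package, which is the class of bug the advisory describes.
    """
    path = os.path.join(root, member)
    if os.path.islink(path):
        raise PermissionError(f"symlink refused: {member}")
    real = os.path.realpath(path)
    if not real.startswith(os.path.realpath(root) + os.sep):
        raise PermissionError(f"path escapes root: {member}")
    return open(real, "rb")
```

Checking `islink` and re-resolving with `realpath` before opening closes both the direct-symlink and the nested-`..` variants of the escape.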

Contacts Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: Processing a maliciously crafted contact may lead to UI spoofing Description: An inconsistent user interface issue was addressed with improved state management. CVE-2017-7152: Oliver Paukstadt of Thinking Objects GmbH (to.com)

File System Events Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8798: ABC Research s.r.o. working with Trend Micro's Zero Day Initiative

Graphics Driver Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: An application may be able to execute arbitrary code with system privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8784: Vasiliy Vasilyev and Ilya Finogeev of Webinar, LLC

Kernel Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: An application may be able to read restricted memory Description: A validation issue was addressed with improved input sanitization. CVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure

Kernel Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2019-8786: an anonymous researcher

Screen Time Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: A local user may be able to record the screen without a visible screen recording indicator Description: A consistency issue existed in deciding when to show the screen recording indicator. CVE-2019-8793: Ryan Jenkins of Lake Forrest Prep School

Setup Assistant Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: An attacker in physical proximity may be able to force a user onto a malicious Wi-Fi network during device setup Description: An inconsistency in Wi-Fi network configuration settings was addressed. CVE-2019-8804: Christy Philip Mathew of Zimperium, Inc

WebKit Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A logic issue was addressed with improved state management. CVE-2019-8813: an anonymous researcher

WebKit Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2019-8782: Cheolung Lee of LINE+ Security Team CVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team CVE-2019-8808: found by OSS-Fuzz CVE-2019-8811: Soyeon Park of SSLab at Georgia Tech CVE-2019-8812: an anonymous researcher CVE-2019-8814: Cheolung Lee of LINE+ Security Team CVE-2019-8816: Soyeon Park of SSLab at Georgia Tech CVE-2019-8819: Cheolung Lee of LINE+ Security Team CVE-2019-8820: Samuel Groß of Google Project Zero CVE-2019-8821: Sergei Glazunov of Google Project Zero CVE-2019-8822: Sergei Glazunov of Google Project Zero CVE-2019-8823: Sergei Glazunov of Google Project Zero

WebKit Process Model Available for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4 and later, and iPod touch 7th generation Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2019-8815: Apple

Additional recognition

CFNetwork We would like to acknowledge Lily Chen of Google for their assistance.

WebKit We would like to acknowledge Dlive of Tencent's Xuanwu Lab and Zhiyi Zhang of Codesafe Team of Legendsec at Qi'anxin Group for their assistance.

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device.

The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated:

  • Navigate to Settings
  • Select General
  • Select About. The version after applying this update will be "iOS 13.2 and iPadOS 13.2"


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-201912-1853",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.0.3"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.2"
      },
      {
        "model": "icloud",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      },
      {
        "model": "icloud",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.8"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.2"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple,Red Hat,WebKitGTK+ Team,Gentoo",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2019-8782",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2019-8782",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-160217",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2019-8782",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2019-8782",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-201910-1751",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-160217",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2019-8782",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in iOS 13.2 and iPadOS 13.2, tvOS 13.2, Safari 13.0.3, iTunes for Windows 12.10.2, iCloud for Windows 11.0. Processing maliciously crafted web content may lead to arbitrary code execution. Apple Safari is a web browser that is the default browser included with the Mac OS X and iOS operating systems. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. WebKit is one of the web browser engine components. A security vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple Safari prior to 13.0.3; iOS prior to 13.2; iPadOS prior to 13.2; tvOS prior to 13.2; Windows-based iCloud prior to 11.0; Windows-based iTunes prior to 12.10.2. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-8601)\nAn out-of-bounds read was addressed with improved input validation. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. This issue is fixed in watchOS 6.1. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. 
The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. A file URL may be incorrectly processed. (CVE-2020-3885)\nA race condition was addressed with additional validation. An application may be able to read restricted memory. (CVE-2020-3901)\nAn input validation issue was addressed with improved input validation. (CVE-2020-3902). Solution:\n\nDownload the release images via:\n\nquay.io/redhat/quay:v3.3.3\nquay.io/redhat/clair-jwt:v3.3.3\nquay.io/redhat/quay-builder:v3.3.3\nquay.io/redhat/clair:v3.3.3\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1905758 - CVE-2020-27831 quay: email notifications authorization bypass\n1905784 - CVE-2020-27832 quay: persistent XSS in repository notification display\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nPROJQUAY-1124 - NVD feed is broken for latest Clair v2 version\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2020:5633-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2020:5633\nIssue date:        2021-02-24\nCVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 \n                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 \n                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 \n                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 \n                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 \n                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 \n                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 \n                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 \n                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 \n                   CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 \n                   CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 \n                   CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 \n                   CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 \n                   CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 \n                   CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 \n                   CVE-2019-16935 
CVE-2019-17450 CVE-2019-17546 \n                   CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 \n                   CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 \n                   CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 \n                   CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 \n                   CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 \n                   CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 \n                   CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 \n                   CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 \n                   CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 \n                   CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 \n                   CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 \n                   CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 \n                   CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 \n                   CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 \n                   CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 \n                   CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 \n                   CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 \n                   CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 \n                   CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 \n                   CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 \n                   CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 \n                   CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 \n                   CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 \n                   CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 \n                   CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 \n                   CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 \n                   CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 \n                   CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 \n                   CVE-2020-10763 
CVE-2020-10773 CVE-2020-10774 \n                   CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 \n                   CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 \n                   CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 \n                   CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 \n                   CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 \n                   CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 \n                   CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 \n                   CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 \n                   CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 \n                   CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 \n                   CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 \n                   CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 \n                   CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 \n                   CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 \n                   CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 \n                   CVE-2021-2007 CVE-2021-3121 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.7.0 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.7.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2020:5634\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-x86_64\n\nThe image digest is\nsha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-s390x\n\nThe image digest is\nsha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le\n\nThe image digest is\nsha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6\n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor. 
\n\nSecurity Fix(es):\n\n* crewjam/saml: authentication bypass in saml authentication\n(CVE-2020-27846)\n\n* golang: crypto/ssh: crafted authentication request can lead to nil\npointer dereference (CVE-2020-29652)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* kubernetes: Secret leaks in kube-controller-manager when using vSphere\nProvider (CVE-2020-8563)\n\n* containernetworking/plugins: IPv6 router advertisements allow for MitM\nattacks on IPv4 clusters (CVE-2020-10749)\n\n* heketi: gluster-block volume password details available in logs\n(CVE-2020-10763)\n\n* golang.org/x/text: possibility to trigger an infinite loop in\nencoding/unicode could lead to crash (CVE-2020-14040)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* golang-github-gorilla-websocket: integer overflow leads to denial of\nservice (CVE-2020-27813)\n\n* golang: math/big: panic during recursive division of very large numbers\n(CVE-2020-28362)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor OpenShift Container Platform 4.7, see the following documentation,\nwhich\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html. \n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1620608 - Restoring deployment config with history leads to weird state\n1752220 - [OVN] Network Policy fails to work when project label gets overwritten\n1756096 - Local storage operator should implement must-gather spec\n1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n1768255 - installer reports 100% complete but failing components\n1770017 - Init containers restart when the exited container is removed from node. \n1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating\n1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset\n1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale\n1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands\n1784298 - \"Displaying with reduced resolution due to large dataset.\" would show under some conditions\n1785399 - Under condition of heavy pod creation, creation fails with \u0027error reserving pod name ...: name is reserved\"\n1797766 - Resource Requirements\" specDescriptor fields - CPU and Memory injects empty string YAML editor\n1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
\n1805025 - [OSP] Machine status doesn\u0027t become \"Failed\" when creating a machine with invalid image\n1805639 - Machine status should be \"Failed\" when creating a machine with invalid machine configuration\n1806000 - CRI-O failing with: error reserving ctr name\n1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be\n1810438 - Installation logs are not gathered from OCP nodes\n1812085 - kubernetes-networking-namespace-pods dashboard doesn\u0027t exist\n1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation\n1813012 - EtcdDiscoveryDomain no longer needed\n1813949 - openshift-install doesn\u0027t use env variables for OS_* for some of API endpoints\n1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use\n1819053 - loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: OpenAPI spec does not exist\n1819457 - Package Server is in \u0027Cannot update\u0027 status despite properly working\n1820141 - [RFE] deploy qemu-quest-agent on the nodes\n1822744 - OCS Installation CI test flaking\n1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario\n1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool\n1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file\n1829723 - User workload monitoring alerts fire out of the box\n1832968 - oc adm catalog mirror does not mirror the index image itself\n1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN\n1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters\n1834995 - 
olmFull suite always fails once the suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5] kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capacity breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text
1853116 - `--to` option does not work with `--credentials-requests` flag
1853352 - [v2v][UI] Storage Class fields should not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown of one of the RHV Hypervisors, OCP worker node machines are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importing vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there's no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why "No persistent storage alerts" was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g. I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fails on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks, OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCS 4.5] UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery, pods are stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content is expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when creating localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes go into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g. checkmark in the overview page have no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: "?" button/icon in Developer Console -> Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig that has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused 'chdir to cwd ("/mount-point") set in config.json failed: permission denied'"
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading from 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr, OVN), communication through this loadbalancer fails after 2-5 minutes
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov] VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when importing the whole image repository
1878086 - OCP 4.6+OCS 4.6 (multiple SC) Internal Mode - UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user accesses pvc upload page
1878956 - `oc api-resources` does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE] Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE] Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod oauth-proxy container: "Authorization header does not start with 'Basic', skipping basic authentication"
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables))
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPolicy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails with error: the container name "assisted-installer" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication requires functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claims the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui] VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [BareMetal IPI] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation] Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display Manual after the APPROVAL changed to Manual from Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approving one
1882667 - [ovn] br-ex Link not found when scaling up RHEL worker
1882723 - [vsphere] Suggested minimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Maintenance and Power On/Off status
1883422 - operator-sdk cleanup fails after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox to enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to being unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visually impaired user using screen reader not able to select Admin/Developer console options in drop down menu
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7] UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accessibility violations
1885706 - Cypress: Fix 'link-name' accessibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad rootDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout option doesn't get selected style on click, i.e. grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v] Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with "no fc disk found" error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgrading from 4.5 to 4.6
1887046 - Event for LSO needs update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver (gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - `oc explain localvolumediscovery` returns empty description
1887751 - `oc explain localvolumediscoveryresult` returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approving the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take effect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some container images on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod errors when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs show error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover messages are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accessibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g. checkmark in the Node > overview page have no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert saying an operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert] CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure environment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created from the LSO page user doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10, version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - mismatched font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging causes problems. It doesn't sync the "overall" sha, it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC are already created before OCS install, creation of LVD and LVS is skipped when user clicks created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values.
PVC is not getting created and no error is shown\n1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn\u0027t meet requirements of chart)\n1891362 - Wrong metrics count for openshift_build_result_total\n1891368 - fync should be fsync for etcdHighFsyncDurations alert\u0027s annotations.message\n1891374 - fync should be fsync for etcdHighFsyncDurations critical alert\u0027s annotations.message\n1891376 - Extra text in Cluster Utilization charts\n1891419 - Wrong detail head on network policy detail page. \n1891459 - Snapshot tests should report stderr of failed commands\n1891498 - Other machine config pools do not show during update\n1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage\n1891551 - Clusterautoscaler doesn\u0027t scale up as expected\n1891552 - Handle missing labels as empty. \n1891555 - The windows oc.exe binary does not have version metadata\n1891559 - kuryr-cni cannot start new thread\n1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11\n1891625 - [Release 4.7] Mutable LoadBalancer Scope\n1891702 - installer get pending when additionalTrustBundle is added into  install-config.yaml\n1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails\n1891740 - OperatorStatusChanged is noisy\n1891758 - the authentication operator may spam DeploymentUpdated event endlessly\n1891759 - Dockerfile builds cannot change /etc/pki/ca-trust\n1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1\n1891825 - Error message not very informative in case of mode mismatch\n1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 
\n1891951 - UI should show warning while creating pools with compression on\n1891952 - [Release 4.7] Apps Domain Enhancement\n1891993 - 4.5 to 4.6 upgrade doesn\u0027t remove deployments created by marketplace\n1891995 - OperatorHub displaying old content\n1891999 - Storage efficiency card showing wrong compression ratio\n1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28\u0027 not found (required by ./opm)\n1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. \n1892198 - TypeError in \u0027Performance Profile\u0027 tab displayed for \u0027Performance Addon Operator\u0027\n1892288 - assisted install workflow creates excessive control-plane disruption\n1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config\n1892358 - [e2e][automation] update feature gate for kubevirt-gating job\n1892376 - Deleted netnamespace could not be re-created\n1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky\n1892393 - TestListPackages is flaky\n1892448 - MCDPivotError alert/metric missing\n1892457 - NTO-shipped stalld needs to use FIFO for boosting. \n1892467 - linuxptp-daemon crash\n1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env\n1892653 - User is unable to create KafkaSource with v1beta\n1892724 - VFS added to the list of devices of the nodeptpdevice CRD\n1892799 - Mounting additionalTrustBundle in the operator\n1893117 - Maintenance mode on vSphere blocks installation. \n1893351 - TLS secrets are not able to edit on console. 
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import 
wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca 
injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest\n1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded\n1896244 - Found a panic in storage e2e test\n1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general\n1896302 - [e2e][automation] Fix 4.6 test failures\n1896365 - [Migration]The SDN migration cannot revert under some conditions\n1896384 - [ovirt IPI]: local coredns resolution not working\n1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6\n1896529 - Incorrect instructions in the Serverless operator and application quick starts\n1896645 - documentationBaseURL needs to be updated for 4.7\n1896697 - [Descheduler] policy.yaml param in cluster configmap is empty\n1896704 - Machine API components should honour cluster wide proxy settings\n1896732 - \"Attach to Virtual Machine OS\" button should not be visible on old clusters\n1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection  is incompatible with SR-IOV operator\n1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails\n1896918 - start creating new-style Secrets for AWS\n1896923 - DNS pod /metrics exposed on anonymous http port\n1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... 
must be no more than 63 characters\n1897003 - VNC console cannot be connected after visit it in new window\n1897008 - Cypress: reenable check for \u0027aria-hidden-focus\u0027 rule \u0026 checkA11y test for modals\n1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO\n1897039 - router pod keeps printing log: template \"msg\"=\"router reloaded\"  \"output\"=\"[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option \u0027http-use-htx\u0027 is deprecated and ignored\n1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV. \n1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces\n1897138 - oVirt provider uses depricated cluster-api project\n1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly\n1897252 - Firing alerts are not showing up in console UI after cluster is up for some time\n1897354 - Operator installation showing success, but Provided APIs are missing\n1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with \"connection refused\"\n1897412 - [sriov]disableDrain did not be updated in CRD of manifest\n1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page\n1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to \u0027localhost\u0027\n1897520 - After restarting nodes the image-registry co is in degraded true state. 
\n1897584 - Add casc plugins\n1897603 - Cinder volume attachment detection failure in Kubelet\n1897604 - Machine API deployment fails: Kube-Controller-Manager can\u0027t reach API: \"Unauthorized\"\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests\n1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition\n1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`\n1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing\n1897897 - ptp lose sync openshift 4.6\n1898036 - no network after reboot (IPI)\n1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically\n1898097 - mDNS floods the baremetal network\n1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem\n1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied\n1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster\n1898174 - [OVN] EgressIP does not guard against node IP assignment\n1898194 - GCP: can\u0027t install on custom machine types\n1898238 - Installer validations allow same floating IP for API and Ingress\n1898268 - [OVN]: `make check` broken on 4.6\n1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default\n1898320 - Incorrect Apostrophe  Translation of  \"it\u0027s\" in Scheduling Disabled Popover\n1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 
\n1898407 - [Deployment timing regression] Deployment takes longer with 4.7\n1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service\n1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine\n1898500 - Failure to upgrade operator when a Service is included in a Bundle\n1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic\n1898532 - Display names defined in specDescriptors not respected\n1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted\n1898613 - Whereabouts should exclude IPv6 ranges\n1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase\n1898679 - Operand creation form - Required \"type: object\" properties (Accordion component) are missing red asterisk\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator\n1898839 - Wrong YAML in operator metadata\n1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job\n1898873 - Remove TechPreview Badge from Monitoring\n1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way\n1899111 - [RFE] Update jenkins-maven-agen to maven36\n1899128 - VMI details screen -\u003e show the warning that it is preferable to have a VM only if the VM actually does not exist\n1899175 - bump the RHCOS boot images for 4.7\n1899198 - Use new packages for ipa ramdisks\n1899200 - In Installed Operators page I cannot search for an Operator by it\u0027s name\n1899220 - Support AWS IMDSv2\n1899350 - configure-ovs.sh doesn\u0027t configure bonding options\n1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error \"An error occurred Not Found\"\n1899459 - 
Failed to start monitoring pods once the operator removed from override list of CVO\n1899515 - Passthrough credentials are not immediately re-distributed on update\n1899575 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899582 - update discovery burst to reflect lots of CRDs on openshift clusters\n1899588 - Operator objects are re-created after all other associated resources have been deleted\n1899600 - Increased etcd fsync latency as of OCP 4.6\n1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup\n1899627 - Project dashboard Active status using small icon\n1899725 - Pods table does not wrap well with quick start sidebar open\n1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)\n1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality\n1899835 - catalog-operator repeatedly crashes with \"runtime error: index out of range [0] with length 0\"\n1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap\n1899853 - additionalSecurityGroupIDs not working for master nodes\n1899922 - NP changes sometimes influence new pods. 
\n1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet\n1900008 - Fix internationalized sentence fragments in ImageSearch.tsx\n1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx\n1900020 - Remove \u0026apos; from internationalized keys\n1900022 - Search Page - Top labels field is not applied to selected Pipeline resources\n1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently\n1900126 - Creating a VM results in suggestion to create a default storage class when one already exists\n1900138 - [OCP on RHV] Remove insecure mode from the installer\n1900196 - stalld is not restarted after crash\n1900239 - Skip \"subPath should be able to unmount\" NFS test\n1900322 - metal3 pod\u0027s toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists\n1900377 - [e2e][automation] create new css selector for active users\n1900496 - (release-4.7) Collect spec config for clusteroperator resources\n1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks\n1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue\n1900759 - include qemu-guest-agent by default\n1900790 - Track all resource counts via telemetry\n1900835 - Multus errors when cachefile is not found\n1900935 - `oc adm release mirror` panic panic: runtime error\n1900989 - accessing the route cannot wake up the idled resources\n1901040 - When scaling down the status of the node is stuck on deleting\n1901057 - authentication operator health check failed when installing a cluster behind proxy\n1901107 - pod donut shows incorrect information\n1901111 - Installer dependencies are broken\n1901200 - linuxptp-daemon crash when enable debug log level\n1901301 - CBO should handle platform=BM without provisioning CR\n1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly\n1901363 - High Podready 
Latency due to timed out waiting for annotations\n1901373 - redundant bracket on snapshot restore button\n1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with \"timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true\"\n1901395 - \"Edit virtual machine template\" action link should be removed\n1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting\n1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP\n1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema\n1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod \"before all\" hook for \"creates the resource instance\"\n1901604 - CNO blocks editing Kuryr options\n1901675 - [sig-network] multicast when using one of the plugins \u0027redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy\u0027 should allow multicast traffic in namespaces where it is enabled\n1901909 - The device plugin pods / cni pod are restarted every 5 minutes\n1901982 - [sig-builds][Feature:Builds] build can reference a cluster service  with a build being created from new-build should be able to run a build that references a cluster service\n1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error\n1902059 - Wire a real signer for service accout issuer\n1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902157 - The DaemonSet machine-api-termination-handler couldn\u0027t allocate Pod\n1902253 - MHC status doesnt set RemediationsAllowed = 0\n1902299 - Failed to mirror operator catalog - error: destination registry required\n1902545 - Cinder csi 
driver node pod should add nodeSelector for Linux\n1902546 - Cinder csi driver node pod doesn\u0027t run on master node\n1902547 - Cinder csi driver controller pod doesn\u0027t run on master node\n1902552 - Cinder csi driver does not use the downstream images\n1902595 - Project workloads list view doesn\u0027t show alert icon and hover message\n1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent\n1902601 - Cinder csi driver pods run as BestEffort qosClass\n1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group\n1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails\n1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked\n1902824 - failed to generate semver informed package manifest: unable to determine default channel\n1902894 - hybrid-overlay-node crashing trying to get node object during initialization\n1902969 - Cannot load vmi detail page\n1902981 - It should default to current namespace when create vm from template\n1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file  via s3:// URI\n1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry\n1903034 - OLM continuously printing debug logs\n1903062 - [Cinder csi driver] Deployment mounted volume have no write access\n1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready\n1903107 - Enable vsphere-problem-detector e2e tests\n1903164 - OpenShift YAML editor jumps to top every few seconds\n1903165 - Improve Canary Status Condition handling for e2e tests\n1903172 - Column Management: Fix sticky footer on scroll\n1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled\n1903188 - [Descheduler] cluster log reports failed to 
validate server configuration\" err=\"unsupported log format:\n1903192 - Role name missing on create role binding form\n1903196 - Popover positioning is misaligned for Overview Dashboard status items\n1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. \n1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components\n1903248 - Backport Upstream Static Pod UID patch\n1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]\n1903290 - Kubelet repeatedly log the same log line from exited containers\n1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. \n1903382 - Panic when task-graph is canceled with a TaskNode with no tasks\n1903400 - Migrate a VM which is not running goes to pending state\n1903402 - Nic/Disk on VMI overview should link to VMI\u0027s nic/disk page\n1903414 - NodePort is not working when configuring an egress IP address\n1903424 - mapi_machine_phase_transition_seconds_sum doesn\u0027t work\n1903464 - \"Evaluating rule failed\" for \"record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\" and \"record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"\n1903639 - Hostsubnet gatherer produces wrong output\n1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service\n1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started\n1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image\n1903717 - Handle different Pod selectors for metal3 Deployment\n1903733 - Scale up followed by scale down can delete all running workers\n1903917 - Failed to load \"Developer Catalog\" page\n1903999 - Httplog response code is always zero\n1904026 - The quota controllers should resync on new 
resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE -Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on monitoring page
1909236 - Remove pinned icon overlap on resource name
1909249 - Intermittent packet drop from pod to pod
1909276 - Accessibility Issue on create project modal
1909289 - oc debug of an init container no longer works
1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2
1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle
1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it
1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O
1909464 - Build operator-registry with golang-1.15
1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found
1909521 - Add kubevirt cluster type for e2e-test workflow
1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created
1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node
1909610 - Fix available capacity when no storage class selected
1909678 - scale up / down buttons available on pod details side panel
1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined
1909739 - Arbiter request data changes
1909744 - cluster-api-provider-openstack: Bump gophercloud
1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline
1909791 - Update standalone kube-proxy config for EndpointSlice
1909792 - Empty states for some details page subcomponents are not i18ned
1909815 - Perspective switcher is only half-i18ned
1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body
1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI
1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing
1909911 - [OVN]EgressFirewall caused a segfault
1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument
1909958 - Support Quick Start Highlights Properly
1909978 - ignore-volume-az = yes not working on standard storageClass
1909981 - Improve statement in template select step
1909992 - Fail to pull the bundle image when using the private index image
1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev
1910036 - QE - Design Gherkin Scenarios ODC-4504
1910049 - UPI: ansible-galaxy is not supported
1910127 - [UPI on oVirt]: Improve UPI Documentation
1910140 - fix the api dashboard with changes in upstream kube 1.20
1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable
1910165 - DHCP to static lease script doesn't handle multiple addresses
1910305 - [Descheduler] - The minKubeVersion should be 1.20.0
1910409 - Notification drawer is not localized for i18n
1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials
1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation
1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page
1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work
1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready
1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability
1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded
1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected"
1910753 - Support Directory Path to Devfile
1910805 - Missing translation for Pipeline status and breadcrumb text
1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer
1910840 - Show Nonexistent command info in the `oc rollback -h` help page
1910859 - breadcrumbs doesn't use last namespace
1910866 - Unify templates string
1910870 - Unify template dropdown action
1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6
1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads"
1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard
1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration"
1911213 - Wrong and misleading warning for VMs that were created manually (not from template)
1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created
1911269 - waiting for the build message present when build exists
1911280 - Builder images are not detected for Dotnet, Httpd, NGINX
1911307 - Pod Scale-up requires extra privileges in OpenShift web-console
1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template
1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error
1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template
1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation
1911418 - [v2v] The target storage class name is not displayed if default storage class is used
1911434 - git ops empty state page displays icon with watermark
1911443 - SSH Cretifiaction field should be validated
1911465 - IOPS display wrong unit
1911474 - Devfile Application Group Does Not Delete Cleanly (errors)
1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController
1911574 - Expose volume mode on Upload Data form
1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined
1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel
1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command output said 'Failed to run bundle''
1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state
1911782 - Descheduler should not evict pod used local storage by the PVC
1911796 - uploading flow being displayed before submitting the form
1912066 - The ansible type operator's manager container is not stable when managing the CR
1912077 - helm operator's default rbac forbidden
1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory'
1912237 - Rebase CSI sidecars for 4.7
1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page
1912409 - Fix flow schema deployment
1912434 - Update guided tour modal title
1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken
1912523 - Standalone pod status not updating in topology graph
1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion
1912558 - TaskRun list and detail screen doesn't show Pending status
1912563 - p&f: carry 97206: clean up executing request on panic
1912565 - OLM macOS local build broken by moby/term dependency
1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion
1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff
1912590 - publicImageRepository not being populated
1912640 - Go operator's controller pods is forbidden
1912701 - Handle dual-stack configuration for NIC IP
1912703 - multiple queries can't be plotted in the same graph under some conditons
1912730 - Operator backed: In-context should support visual connector if SBO is not installed
1912828 - Align High Performance VMs with High Performance in RHV-UI
1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates
1912852 - VM from wizard - available VM templates - "storage" field is "0 B"
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it 's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not aditable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in `from dockerfile` flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk souce
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - Its not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot
Alert has a misspelling\n1914793 - device names should not be translated\n1914894 - Warn about using non-groupified api version\n1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug\n1914932 - Put correct resource name in relatedObjects\n1914938 - PVC disk is not shown on customization wizard general tab\n1914941 - VM Template rootdisk is not deleted after fetching default disk bus\n1914975 - Collect logs from openshift-sdn namespace\n1915003 - No estimate of average node readiness during lifetime of a cluster\n1915027 - fix MCS blocking iptables rules\n1915041 - s3:ListMultipartUploadParts is relied on implicitly\n1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons\n1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours\n1915085 - Pods created and rapidly terminated get stuck\n1915114 - [aws-c2s] worker machines are not create during install\n1915133 - Missing default pinned nav items in dev perspective\n1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource\n1915187 - Remove the \"Tech preview\" tag in web-console for volumesnapshot\n1915188 - Remove HostSubnet anonymization\n1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment\n1915217 - OKD payloads expect to be signed with production keys\n1915220 - Remove dropdown workaround for user settings\n1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure\n1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod\n1915277 - [e2e][automation]fix cdi upload form test\n1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout\n1915304 - Updating scheduling component builder \u0026 base images to be consistent with ART\n1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows 
node\n1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection\n1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod\n1915357 - Dev Catalog doesn\u0027t load anything if virtualization operator is installed\n1915379 - New template wizard should require provider and make support input a dropdown type\n1915408 - Failure in operator-registry kind e2e test\n1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation\n1915460 - Cluster name size might affect installations\n1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance\n1915540 - Silent 4.7 RHCOS install failure on ppc64le\n1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)\n1915582 - p\u0026f: carry upstream pr 97860\n1915594 - [e2e][automation] Improve test for disk validation\n1915617 - Bump bootimage for various fixes\n1915624 - \"Please fill in the following field: Template provider\" blocks customize wizard\n1915627 - Translate Guided Tour text. \n1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error\n1915647 - Intermittent White screen when the connector dragged to revision\n1915649 - \"Template support\" pop up is not a warning; checkbox text should be rephrased\n1915654 - [e2e][automation] Add a verification for Afinity modal should hint \"Matching node found\"\n1915661 - Can\u0027t run the \u0027oc adm prune\u0027 command in a pod\n1915672 - Kuryr doesn\u0027t work with selfLink disabled. 
1915674 - Golden image PVC creation - storage size should be taken from the template
1915685 - Message for not supported template is not clear enough
1915760 - Need to increase timeout to wait rhel worker get ready
1915793 - quick starts panel syncs incorrectly across browser windows
1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster
1915818 - vsphere-problem-detector: use "_totals" in metrics
1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol
1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version
1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0
1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics
1915885 - Kuryr doesn't support workers running on multiple subnets
1915898 - TaskRun log output shows "undefined" in streaming
1915907 - test/cmd/builds.sh uses docker.io
1915912 - sig-storage-csi-snapshotter image not available
1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard
1915939 - Resizing the browser window removes Web Terminal Icon
1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7
1915962 - ROKS: manifest with machine health check fails to apply in 4.7
1915972 - Global configuration breadcrumbs do not work as expected
1915981 - Install ethtool and conntrack in container for debugging
1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception
1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups
1916021 - OLM enters infinite loop if Pending CSV replaces itself
1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry
1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations
1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk
1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration
1916145 - Explicitly set minimum versions of python libraries
1916164 - Update csi-driver-nfs builder & base images to be consistent with ART
1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7
1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third
1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2
1916379 - error metrics from vsphere-problem-detector should be gauge
1916382 - Can't create ext4 filesystems with Ignition
1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates
1916401 - Deleting an ingress controller with a bad DNS Record hangs
1916417 - [Kuryr] Must-gather does not have all Custom Resources information
1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image
1916454 - teach CCO about upgradeability from 4.6 to 4.7
1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation
1916502 - Boot disk mirroring fails with mdadm error
1916524 - Two rootdisk shows on storage step
1916580 - Default yaml is broken for VM and VM template
1916621 - oc adm node-logs examples are wrong
1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret.
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied, will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message maybe wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job get error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exists to the console-operator config
1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0
1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0
1917799 - Gather s list of names and versions of installed OLM operators
1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error
1917814 - Show Broker create option in eventing under admin perspective
1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1917872 - [oVirt] rebase on latest SDK 2021-01-12
1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image
1917938 - upgrade version of dnsmasq package
1917942 - Canary controller causes panic in ingress-operator
1918019 - Undesired scrollbars in markdown area of QuickStart
1918068 - Flaky olm integration tests
1918085 - reversed name of job and namespace in cvo log
1918112 - Flavor is not editable if a customize VM is created from cli
1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources
1918132 - i18n: Volume Snapshot Contents menu is not translated
1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2
1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP
1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026`
1918185 - Capitalization on PLR details page
1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections
1918318 - Kamelet connector's are not shown in eventing section under Admin perspective
1918351 - Gather SAP configuration (SCC & ClusterRoleBinding)
1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews
1918395 - [ovirt] increase livenessProbe period
1918415 - MCD nil pointer on dropins
1918438 - [ja_JP, zh_CN] Serverless i18n misses
1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig
1918471 - CustomNoUpgrade Feature gates are not working correctly
1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk
1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART
1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART
1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197
1918639 - Event listener with triggerRef crashes the console
1918648 - Subscription page doesn't show InstallPlan correctly
1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack
1918748 - helmchartrepo is not http(s)_proxy-aware
1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI
1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin
1918826 - Insights popover icons are not horizontally aligned
1918879 - need better debug for bad pull secrets
1918958 - The default NMstate instance from the operator is incorrect
1919097 - Close bracket ")" missing at the end of the sentence in the UI
1919231 - quick search modal cut off on smaller screens
1919259 - Make "Add x" singular in Pipeline Builder
1919260 - VM Template list actions should not wrap
1919271 - NM prepender script doesn't support systemd-resolved
1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART
1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry
1919379 - dotnet logo out of date
1919387 - Console login fails with no error when it can't write to localStorage
1919396 - A11y Violation: svg-img-alt on Pod Status ring
1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified
1919750 - Search InstallPlans got Minified React error
1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted
1919823 - OCP 4.7 Internationalization Chinese tranlate issue
1919851 - Visualization does not render when Pipeline & Task share same name
1919862 - The tip information for `oc new-project --skip-config-write` is wrong
1919876 - VM created via customize wizard cannot inherit template's PVC attributes
1919877 - Click on KSVC breaks with white screen
1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment
1919945 - user entered name value overridden by default value when selecting a git repository
1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference
1919970 - NTO does not update when the tuned profile is updated.
1919999 - Bump Cluster Resource Operator Golang Versions
1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration
1920200 - user-settings network error results in infinite loop of requests
1920205 - operator-registry e2e tests not working properly
1920214 - Bump golang to 1.15 in cluster-resource-override-admission
1920248 - re-running the pipelinerun with pipelinespec crashes the UI
1920320 - VM template field is "Not available" if it's created from common template
1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`
1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs
1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off
1920426 - Egress Router CNI OWNERS file should have ovn-k team members
1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username
1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time
1920438 - openshift-tuned panics on turning debugging on/off.
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn
1920481 - kuryr-cni pods using unreasonable amount of CPU
1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof
1920524 - Topology graph crashes adding Open Data Hub operator
1920526 - catalog operator causing CPU spikes and bad etcd performance
1920551 - Boot Order is not editable for Templates in "openshift" namespace
1920555 - bump cluster-resource-override-admission api dependencies
1920571 - fcp multipath will not recover failed paths automatically
1920619 - Remove default scheduler profile value
1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present
1920674 - MissingKey errors in bindings namespace
1920684 - Text in language preferences modal is misleading
1920695 - CI is broken because of bad image registry reference in the Makefile
1920756 - update generic-admission-server library to get the system:masters authorization optimization
1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set
1920771 - i18n: Delete persistent volume claim drop down is not translated
1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI
1920912 - Unable to power off BMH from console
1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2"
1920984 - [e2e][automation] some menu items names are out dated
1921013 - Gather PersistentVolume definition (if any) used in image registry config
1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)
1921087 - 'start next quick start' link doesn't work and is unintuitive
1921088 - test-cmd is failing on volumes.sh pretty consistently
1921248 - Clarify the kubelet configuration cr description
1921253 - Text filter default placeholder text not internationalized
1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window
1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo
1921277 - Fix Warning and Info log statements to handle arguments
1921281 - oc get -o yaml --export returns "error: unknown flag: --export"
1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist
1921556 - [OCS with Vault]: OCS pods didn't comeup after deploying with Vault details from UI
1921572 - For external source (i.e GitHub Source) form view as well shows yaml
1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass
1921610 - Pipeline metrics font size inconsistency
1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921655 - [OSP] Incorrect error handling during cloudinfo generation
1921713 - [e2e][automation] fix failing VM migration tests
1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view
1921774 - delete application modal errors when a resource cannot be found
1921806 - Explore page APIResourceLinks aren't i18ned
1921823 - CheckBoxControls not internationalized
1921836 - AccessTableRows don't internationalize "User" or "Group"
1921857 - Test flake when hitting router in e2e tests due to one router not being up to date
1921880 - Dynamic plugins are not initialized on console load in production mode
1921911 - Installer PR #4589 is causing leak of IAM role policy bindings
1921921 - "Global Configuration" breadcrumb does not use sentence case
1921949 - Console bug - source code URL broken for gitlab self-hosted repositories
1921954 - Subscription-related constraints in ResolutionFailed events are misleading
1922015 - buttons in modal header are invisible on Safari
1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated
1922050 - [e2e][automation] Improve vm clone tests
1922066 - Cannot create VM from custom template which has extra disk
1922098 - Namespace selection dialog is not closed after select a namespace
1922099 - Updated Readme documentation for QE code review and setup
1922146 - Egress Router CNI doesn't have logging support.
1922267 - Collect specific ADFS error
1922292 - Bump RHCOS boot images for 4.7
1922454 - CRI-O doesn't enable pprof by default
1922473 - reconcile LSO images for 4.8
1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace
1922782 - Source registry missing docker:// in yaml
1922907 - Interop UI Tests - step implementation for updating feature files
1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons
1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD
1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything
1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources
1923102 - [vsphere-problem-detector-operator] pod's version is not correct
1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot
1923674 - k8s 1.20 vendor dependencies
1923721 - PipelineRun running status icon is not rotating
1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios
1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator
1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator
1923874 - Unable to specify values with % in kubeletconfig
1923888 - Fixes error metadata gathering
1923892 - Update arch.md after refactor.
1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator
1923895 - Changelog generation.
1923911 - [e2e][automation] Improve tests for vm details page and list filter
1923945 - PVC Name and Namespace resets when user changes os/flavor/workload
1923951 - EventSources shows `undefined` in project
1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins
1924046 - Localhost: Refreshing on a Project removes it from nav item urls
1924078 - Topology quick search View all results footer should be sticky.
1924081 - NTO should ship the latest Tuned daemon release 2.15
1924084 - backend tests incorrectly hard-code artifacts dir
1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build
1924135 - Under sufficient load, CRI-O may segfault
1924143 - Code Editor Decorator url is broken for Bitbucket repos
1924188 - Language selector dropdown doesn't always pre-select the language
1924365 - Add extra disk for VM which use boot source PXE
1924383 - Degraded network operator during upgrade to 4.7.z
1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box.
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on
1924583 - Deprectaed templates are listed in the Templates screen
1924870 - pick upstream pr#96901: plumb context with request deadline
1924955 - Images from Private external registry not working in deploy Image
1924961 - k8sutil.TrimDNS1123Label creates invalid values
1924985 - Build egress-router-cni for both RHEL 7 and 8
1925020 - Console demo plugin deployment image shoult not point to dockerhub
1925024 - Remove extra validations on kafka source form view net section
1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running
1925072 - NTO needs to ship the current latest stalld v1.7.0
1925163 - Missing info about dev catalog in boot source template column
1925200 - Monitoring Alert icon is missing on the workload in Topology view
1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1
1925319 - bash syntax error in configure-ovs.sh script
1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data
1925516 - Pipeline Metrics Tooltips are overlapping data
1925562 - Add new ArgoCD link from GitOps application environments page
1925596 - Gitops details page image and commit id text overflows past card boundary
1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test
1926588 - The tarball of operator-sdk is not ready for ocp4.7
1927456 - 4.7 still points to 4.6 catalog images
1927500 - API server exits non-zero on 2 SIGTERM signals
1929278 - Monitoring workloads using too high a priorityclass
1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
1929920 - Cluster monitoring documentation link is broken - 404 not found

5. References:

https://access.redhat.com/security/cve/CVE-2018-10103
https://access.redhat.com/security/cve/CVE-2018-10105
https://access.redhat.com/security/cve/CVE-2018-14461
https://access.redhat.com/security/cve/CVE-2018-14462
https://access.redhat.com/security/cve/CVE-2018-14463
https://access.redhat.com/security/cve/CVE-2018-14464
https://access.redhat.com/security/cve/CVE-2018-14465
https://access.redhat.com/security/cve/CVE-2018-14466
https://access.redhat.com/security/cve/CVE-2018-14467
https://access.redhat.com/security/cve/CVE-2018-14468
https://access.redhat.com/security/cve/CVE-2018-14469
https://access.redhat.com/security/cve/CVE-2018-14470
https://access.redhat.com/security/cve/CVE-2018-14553
https://access.redhat.com/security/cve/CVE-2018-14879
https://access.redhat.com/security/cve/CVE-2018-14880
https://access.redhat.com/security/cve/CVE-2018-14881
https://access.redhat.com/security/cve/CVE-2018-14882
https://access.redhat.com/security/cve/CVE-2018-16227
https://access.redhat.com/security/cve/CVE-2018-16228
https://access.redhat.com/security/cve/CVE-2018-16229
https://access.redhat.com/security/cve/CVE-2018-16230
https://access.redhat.com/security/cve/CVE-2018-16300
https://access.redhat.com/security/cve/CVE-2018-16451
https://access.redhat.com/security/cve/CVE-2018-16452
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2019-3884
https://access.redhat.com/security/cve/CVE-2019-5018
https://access.redhat.com/security/cve/CVE-2019-6977
https://access.redhat.com/security/cve/CVE-2019-6978
https://access.redhat.com/security/cve/CVE-2019-8625
https://access.redhat.com/security/cve/CVE-2019-8710
https://access.redhat.com/security/cve/CVE-2019-8720
https://access.redhat.com/security/cve/CVE-2019-8743
https://access.redhat.com/security/cve/CVE-2019-8764
https://access.redhat.com/security/cve/CVE-2019-8766
https://access.redhat.com/security/cve/CVE-2019-8769
https://access.redhat.com/security/cve/CVE-2019-8771
https://access.redhat.com/security/cve/CVE-2019-8782
https://access.redhat.com/security/cve/CVE-2019-8783
https://access.redhat.com/security/cve/CVE-2019-8808
https://access.redhat.com/security/cve/CVE-2019-8811
https://access.redhat.com/security/cve/CVE-2019-8812
https://access.redhat.com/security/cve/CVE-2019-8813
https://access.redhat.com/security/cve/CVE-2019-8814
https://access.redhat.com/security/cve/CVE-2019-8815
https://access.redhat.com/security/cve/CVE-2019-8816
https://access.redhat.com/security/cve/CVE-2019-8819
https://access.redhat.com/security/cve/CVE-2019-8820
https://access.redhat.com/security/cve/CVE-2019-8823
https://access.redhat.com/security/cve/CVE-2019-8835
https://access.redhat.com/security/cve/CVE-2019-8844
https://access.redhat.com/security/cve/CVE-2019-8846
https://access.redhat.com/security/cve/CVE-2019-9455
https://access.redhat.com/security/cve/CVE-2019-9458
https://access.redhat.com/security/cve/CVE-2019-11068
https://access.redhat.com/security/cve/CVE-2019-12614
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13225
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15165
https://access.redhat.com/security/cve/CVE-2019-15166
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-15917
https://access.redhat.com/security/cve/CVE-2019-15925
https://access.redhat.com/security/cve/CVE-2019-16167
https://access.redhat.com/security/cve/CVE-2019-16168
https://access.redhat.com/security/cve/CVE-2019-16231
https://access.redhat.com/security/cve/CVE-2019-16233
https://access.redhat.com/security/cve/CVE-2019-16935
https://access.redhat.com/security/cve/CVE-2019-17450
https://access.redhat.com/security/cve/CVE-2019-17546
https://access.redhat.com/security/cve/CVE-2019-18197
nhttps://access.redhat.com/security/cve/CVE-2019-18808\nhttps://access.redhat.com/security/cve/CVE-2019-18809\nhttps://access.redhat.com/security/cve/CVE-2019-19046\nhttps://access.redhat.com/security/cve/CVE-2019-19056\nhttps://access.redhat.com/security/cve/CVE-2019-19062\nhttps://access.redhat.com/security/cve/CVE-2019-19063\nhttps://access.redhat.com/security/cve/CVE-2019-19068\nhttps://access.redhat.com/security/cve/CVE-2019-19072\nhttps://access.redhat.com/security/cve/CVE-2019-19221\nhttps://access.redhat.com/security/cve/CVE-2019-19319\nhttps://access.redhat.com/security/cve/CVE-2019-19332\nhttps://access.redhat.com/security/cve/CVE-2019-19447\nhttps://access.redhat.com/security/cve/CVE-2019-19524\nhttps://access.redhat.com/security/cve/CVE-2019-19533\nhttps://access.redhat.com/security/cve/CVE-2019-19537\nhttps://access.redhat.com/security/cve/CVE-2019-19543\nhttps://access.redhat.com/security/cve/CVE-2019-19602\nhttps://access.redhat.com/security/cve/CVE-2019-19767\nhttps://access.redhat.com/security/cve/CVE-2019-19770\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20054\nhttps://access.redhat.com/security/cve/CVE-2019-20218\nhttps://access.redhat.com/security/cve/CVE-2019-20386\nhttps://access.redhat.com/security/cve/CVE-2019-20387\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20636\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/security/cve/CVE-2019-20812\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2019-20916\nhttps://access.redhat.com/security/cve/CVE-2020-0305\nhttps://access.redhat.com/security/cve/CVE-2020-0444\nhttps://access.redhat.com/security/cve/CVE-2020-1716\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.c
om/security/cve/CVE-2020-1751\nhttps://access.redhat.com/security/cve/CVE-2020-1752\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-2574\nhttps://access.redhat.com/security/cve/CVE-2020-2752\nhttps://access.redhat.com/security/cve/CVE-2020-2922\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3898\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-6405\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8492\nhttps://access.redhat.com/security/cve/CVE-2020-8563\nhttps://access.redhat.com/security/cve/CVE-2020-8566\nhttps://access.redhat.com/security/cve/CVE-2020-8619\nhttps://access.redhat.com/security/cve/CVE-2020-8622\nhttps://access.redhat.com/security/cve/CVE-2020-8623\nhttps://access.redhat.com/security/cve/CVE-2020-8624\nhttps://access.redhat.com/security/cve/CVE-2020-8647\nhttps://access.redhat.com/security/cve/CVE-2020-8648\nhttps://access.redhat.com/security/cve/CVE-2020-8649\nhttps://access.redhat.com/security/cve/CVE-2020-9327\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com
/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-10732\nhttps://access.redhat.com/security/cve/CVE-2020-10749\nhttps://access.redhat.com/security/cve/CVE-2020-10751\nhttps://access.redhat.com/security/cve/CVE-2020-10763\nhttps://access.redhat.com/security/cve/CVE-2020-10773\nhttps://access.redhat.com/security/cve/CVE-2020-10774\nhttps://access.redhat.com/security/cve/CVE-2020-10942\nhttps://access.redhat.com/security/cve/CVE-2020-11565\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-12465\nhttps://access.redhat.com/security/cve/CVE-2020-12655\nhttps://access.redhat.com/security/cve/CVE-2020-12659\nhttps://access.redhat.com/security/cve/CVE-2020-12770\nhttps://access.redhat.com/security/cve/CVE-2020-12826\nhttps://access.redhat.com/security/cve/CVE-2020-13249\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14019\nhttps://access.redhat.com/security/cve/CVE-2020-14040\nhttps://access.redhat.com/security/cve/CVE-2020-14381\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nh
ttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15157\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-15862\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2020-16166\nhttps://access.redhat.com/security/cve/CVE-2020-24490\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-25211\nhttps://access.redhat.com/security/cve/CVE-2020-25641\nhttps://access.redhat.com/security/cve/CVE-2020-25658\nhttps://access.redhat.com/security/cve/CVE-2020-25661\nhttps://access.redhat.com/security/cve/CVE-2020-25662\nhttps://access.redhat.com/security/cve/CVE-2020-25681\nhttps://access.redhat.com/security/cve/CVE-2020-25682\nhttps://access.redhat.com/security/cve/CVE-2020-25683\nhttps://access.redhat.com/security/cve/CVE-2020-25684\nhttps://access.redhat.com/security/cve/CVE-2020-25685\nhttps://access.redhat.com/security/cve/CVE-2020-25686\nhttps://access.redhat.com/security/cve/CVE-2020-25687\nhttps://access.redhat.com/security/cve/CVE-2020-25694\nhttps://access.redhat.com/security/cve/CVE-2020-25696\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-27813\nhttps://access.redhat.com/security/cve/CVE-2020-27846\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/cve/CVE-2020-29652\nhttps://access.redhat.com/security/cve/CVE-2021-2007\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYDZ+bNzjgjWX9erEAQghXg//awGwjQxJ5LEZWBTdgyuCa8mHEi2rop5T\nlmebolBMNRSbo9gI8LMSHlvIBBFiV4CuFvfxE0AVLNentfzOTH11TxNWe1KQYt4H\nEmcGHPeHWTxKDkvAHtVcWXy9WN3y5d4lHSaq6AR1nHRPcj/k1upyx22kotpnYxN8\n4d49PjFTO3YbmdYpNLVJ9nY8izqUpTfM7YSyj6ANZSlaYc5Z215o6TPo6e3wobf4\nmWu+VfDS0v+/AbGhQhO2sQ7r2ysJ85MB7c62cxck4a51KiA0NKd4xr0TAA4KHnNL\nISHFzi5QYXu+meE+9wYRo1ZjJ5fbPj41+1TJbR6O4CbP0xQiFpcUSipNju3rGSGy\nAe5G/QGT8J7HzOjlKVvY3SFu/odENR6c+xUIr7IB/FBlu7DdPF2XxMZDQD4DKHEk\n4aiDbuiEL3Yf78Ic1RqPPmrj9plIwprVFQz+k3JaQXKD+1dBxO6tk+nVu2/5xNbM\nuR03hrthYYIpdXLSWU4lzq8j3kQ9wZ4j/m2o6/K6eHNl9PyqAG5jfQv9bVf8E3oG\nkrzc/JLvOfHNEQ/oJs/v/DFDmnAxshCCtGWlpLJ5J0pcD3EePsrPNs1QtQurVrMv\nRjfBCWKOij53+BinrMKHdsHxfur7GCFCIQCVaLIv6GUjX2NWI0voIVA8JkrFNNp6\nMcvuEaxco7U=\n=sw8i\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1732329 - Virtual Machine is missing documentation of its properties in yaml editor\n1783192 - Guest kernel panic when start RHEL6.10 guest with q35 machine type and virtio disk in cnv\n1791753 - [RFE] [SSP] Template validator should check validations in template\u0027s parent template\n1804533 - CVE-2020-9283 golang.org/x/crypto: Processing of crafted ssh-ed25519 public keys allows for panic\n1848954 - KMP missing CA extensions  in cabundle of mutatingwebhookconfiguration\n1848956 - KMP  requires downtime for CA stabilization during certificate rotation\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1853911 - VM with dot in network name fails to start with unclear message\n1854098 - NodeNetworkState on workers doesn\u0027t have \"status\" key due to nmstate-handler pod failure to run \"nmstatectl show\"\n1856347 - SR-IOV : Missing network name for sriov during vm setup\n1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS\n1859235 - Common Templates - after upgrade there are 2  common templates per each os-workload-flavor combination\n1860714 - No API information from `oc explain`\n1860992 - CNV upgrade - users are not removed from privileged  SecurityContextConstraints\n1864577 - [v2v][RHV to CNV non migratable source VM fails to import to Ceph-rbd / File system due to overhead required for Filesystem\n1866593 - CDI is not handling vm disk clone\n1867099 - CVE-2020-16845 golang: ReadUvarint and ReadVarint can read an unlimited number of bytes from invalid inputs\n1868817 - Container-native Virtualization 2.6.0 Images\n1873771 - Improve the VMCreationFailed error message caused by VM low memory\n1874812 - SR-IOV: Guest Agent  expose link-local ipv6 address  for sometime and then remove it\n1878499 - DV import doesn\u0027t recover from scratch space PVC deletion\n1879108 - Inconsistent 
naming of \"oc virt\" command in help text\n1881874 - openshift-cnv namespace is getting stuck if the user tries to delete it while CNV is running\n1883232 - Webscale: kubevirt/CNV datavolume importer pod inability to disable sidecar injection if namespace has sidecar injection enabled but VM Template does NOT\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1885153 - [v2v][RHV to CNv VM import] Wrong Network mapping do not show a relevant error message\n1885418 - [openshift-cnv] issues with memory overhead calculation when limits are used\n1887398 - [openshift-cnv][CNV] nodes need to exist and be labeled first, *before* the NodeNetworkConfigurationPolicy is applied\n1889295 - [v2v][VMware to CNV VM import API] diskMappings: volumeMode Block is not passed on to PVC request. ------------------------------------------------------------------------\nWebKitGTK and WPE WebKit Security Advisory                 WSA-2019-0006\n------------------------------------------------------------------------\n\nDate reported           : November 08, 2019\nAdvisory ID             : WSA-2019-0006\nWebKitGTK Advisory URL  : https://webkitgtk.org/security/WSA-2019-0006.html\nWPE WebKit Advisory URL : https://wpewebkit.org/security/WSA-2019-0006.html\nCVE identifiers         : CVE-2019-8710, CVE-2019-8743, CVE-2019-8764,\n                          CVE-2019-8765, CVE-2019-8766, CVE-2019-8782,\n                          CVE-2019-8783, CVE-2019-8808, CVE-2019-8811,\n                          CVE-2019-8812, CVE-2019-8813, CVE-2019-8814,\n                          CVE-2019-8815, CVE-2019-8816, CVE-2019-8819,\n                          CVE-2019-8820, CVE-2019-8821, CVE-2019-8822,\n                          CVE-2019-8823. \n\nSeveral vulnerabilities were discovered in WebKitGTK and WPE WebKit. \n\nCVE-2019-8710\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to found by OSS-Fuzz. 
\n\nCVE-2019-8743\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to zhunki from Codesafe Team of Legendsec at Qi\u0027anxin Group. \n\nCVE-2019-8764\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to Sergei Glazunov of Google Project Zero. \n\nCVE-2019-8765\n    Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before\n    2.24.3. \n    Credit to Samuel Gro\u00df of Google Project Zero. \n\nCVE-2019-8766\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to found by OSS-Fuzz. \n\nCVE-2019-8782\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to Cheolung Lee of LINE+ Security Team. \n\nCVE-2019-8783\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Cheolung Lee of LINE+ Graylab Security Team. \n\nCVE-2019-8808\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to found by OSS-Fuzz. \n\nCVE-2019-8811\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Soyeon Park of SSLab at Georgia Tech. \n\nCVE-2019-8812\n    Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before\n    2.26.2. \n    Credit to an anonymous researcher. \n\nCVE-2019-8813\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to an anonymous researcher. \n\nCVE-2019-8814\n    Versions affected: WebKitGTK before 2.26.2 and WPE WebKit before\n    2.26.2. \n    Credit to Cheolung Lee of LINE+ Security Team. \n\nCVE-2019-8815\n    Versions affected: WebKitGTK before 2.26.0 and WPE WebKit before\n    2.26.0. \n    Credit to Apple. \n\nCVE-2019-8816\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Soyeon Park of SSLab at Georgia Tech. 
\n\nCVE-2019-8819\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Cheolung Lee of LINE+ Security Team. \n\nCVE-2019-8820\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Samuel Gro\u00df of Google Project Zero. \n\nCVE-2019-8821\n    Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before\n    2.24.3. \n    Credit to Sergei Glazunov of Google Project Zero. \n\nCVE-2019-8822\n    Versions affected: WebKitGTK before 2.24.4 and WPE WebKit before\n    2.24.3. \n    Credit to Sergei Glazunov of Google Project Zero. \n\nCVE-2019-8823\n    Versions affected: WebKitGTK before 2.26.1 and WPE WebKit before\n    2.26.1. \n    Credit to Sergei Glazunov of Google Project Zero. \n\n\nWe recommend updating to the latest stable versions of WebKitGTK and WPE\nWebKit. It is the best way to ensure that you are running safe versions\nof WebKit. Please check our websites for information about the latest\nstable releases. \n\nFurther information about WebKitGTK and WPE WebKit security advisories\ncan be found at: https://webkitgtk.org/security.html or\nhttps://wpewebkit.org/security/. \n\nThe WebKitGTK and WPE WebKit team,\nNovember 08, 2019\n\n. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1823765 - nfd-workers crash under an ipv6 environment\n1838802 - mysql8 connector from operatorhub does not work with metering operator\n1838845 - Metering operator can\u0027t connect to postgres DB from Operator Hub\n1841883 - namespace-persistentvolumeclaim-usage  query returns unexpected values\n1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash\n1868294 - NFD operator does not allow customisation of nfd-worker.conf\n1882310 - CVE-2020-24750 jackson-databind: Serialization gadgets in com.pastdev.httpcomponents.configuration.JndiConfiguration\n1890672 - NFD is missing a build flag to build correctly\n1890741 - path to the CA trust bundle ConfigMap is broken in report operator\n1897346 - NFD worker pods not scheduler on a 3 node master/worker cluster\n1898373 - Metering operator failing upgrade from 4.4 to 4.6 channel\n1900125 - FIPS error while generating RSA private key for CA\n1906129 - OCP 4.7:  Node Feature Discovery (NFD) Operator in CrashLoopBackOff when deployed from OperatorHub\n1908492 - OCP 4.7:  Node Feature Discovery (NFD) Operator Custom Resource Definition file in olm-catalog is not in sync with the one in manifests dir leading to failed deployment from OperatorHub\n1913837 - The CI and ART 4.7 metering images are not mirrored\n1914869 - OCP 4.7 NFD - Operand configuration options for NodeFeatureDiscovery are empty, no supported image for ppc64le\n1916010 - olm skip range is set to the wrong range\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1923998 - NFD Operator is failing to update and remains in Replacing state\n\n5. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202003-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: March 15, 2020\n     Bugs: #699156, #706374, #709612\n       ID: 202003-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in WebKitGTK+, the worst of\nwhich may lead to arbitrary code execution. \n\nBackground\n==========\n\nWebKitGTK+ is a full-featured port of the WebKit rendering engine,\nsuitable for projects requiring any kind of web integration, from\nhybrid HTML/CSS applications to full-fledged web browsers. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk          \u003c 2.26.4                  \u003e= 2.26.4\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in WebKitGTK+. Please\nreview the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.26.4\"\n\nReferences\n==========\n\n[  1 ] CVE-2019-8625\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8625\n[  2 ] CVE-2019-8674\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8674\n[  3 ] CVE-2019-8707\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8707\n[  4 ] CVE-2019-8710\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8710\n[  5 ] CVE-2019-8719\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8719\n[  6 ] CVE-2019-8720\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8720\n[  7 ] CVE-2019-8726\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8726\n[  8 ] CVE-2019-8733\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8733\n[  9 ] CVE-2019-8735\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8735\n[ 10 ] CVE-2019-8743\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8743\n[ 11 ] CVE-2019-8763\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8763\n[ 12 ] CVE-2019-8764\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8764\n[ 13 ] CVE-2019-8765\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8765\n[ 14 ] CVE-2019-8766\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8766\n[ 15 ] CVE-2019-8768\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8768\n[ 16 ] CVE-2019-8769\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8769\n[ 17 ] CVE-2019-8771\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8771\n[ 18 ] CVE-2019-8782\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8782\n[ 19 ] CVE-2019-8783\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8783\n[ 20 ] CVE-2019-8808\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8808\n[ 21 ] CVE-2019-8811\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8811\n[ 22 ] CVE-2019-8812\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8812\n[ 23 ] CVE-2019-8813\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8813\n[ 24 ] 
CVE-2019-8814\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8814\n[ 25 ] CVE-2019-8815\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8815\n[ 26 ] CVE-2019-8816\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8816\n[ 27 ] CVE-2019-8819\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8819\n[ 28 ] CVE-2019-8820\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8820\n[ 29 ] CVE-2019-8821\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8821\n[ 30 ] CVE-2019-8822\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8822\n[ 31 ] CVE-2019-8823\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8823\n[ 32 ] CVE-2019-8835\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8835\n[ 33 ] CVE-2019-8844\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8844\n[ 34 ] CVE-2019-8846\n       https://nvd.nist.gov/vuln/detail/CVE-2019-8846\n[ 35 ] CVE-2020-3862\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3862\n[ 36 ] CVE-2020-3864\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3864\n[ 37 ] CVE-2020-3865\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3865\n[ 38 ] CVE-2020-3867\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3867\n[ 39 ] CVE-2020-3868\n       https://nvd.nist.gov/vuln/detail/CVE-2020-3868\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202003-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2020 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2019-10-29-1 iOS 13.2 and iPadOS 13.2\n\niOS 13.2 and iPadOS 13.2 are now available and address the following:\n\nAccounts\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: A remote attacker may be able to leak memory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2019-8787: Steffen Klee of Secure Mobile Networking Lab at\nTechnische Universit\u00e4t Darmstadt\n\nApp Store\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: A local attacker may be able to login to the account of a\npreviously logged in user without valid credentials. \nCVE-2019-8803: Kiyeon An, \ucc28\ubbfc\uaddc (CHA Minkyu)\n\nAssociated Domains\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: Improper URL processing may lead to data exfiltration\nDescription: An issue existed in the parsing of URLs. \nCVE-2019-8788: Juha Lindstedt of Pakastin, Mirko Tanania, Rauli\nRikama of Zero Keyboard Ltd\n\nAudio\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8785: Ian Beer of Google Project Zero\nCVE-2019-8797: 08Tc3wBB working with SSD Secure Disclosure\n\nAVEVideoEncoder\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2019-8795: 08Tc3wBB working with SSD Secure Disclosure\n\nBooks\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: Parsing a maliciously crafted iBooks file may lead to\ndisclosure of user information\nDescription: A validation issue existed in the handling of symlinks. \nCVE-2019-8789: Gertjan Franken of imec-DistriNet, KU Leuven\n\nContacts\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: Processing a maliciously crafted contact may lead to UI\nspoofing\nDescription: An inconsistent user interface issue was addressed with\nimproved state management. \nCVE-2017-7152: Oliver Paukstadt of Thinking Objects GmbH (to.com)\n\nFile System Events\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8798: ABC Research s.r.o. working with Trend Micro\u0027s Zero\nDay Initiative\n\nGraphics Driver\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: An application may be able to execute arbitrary code with\nsystem privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8784: Vasiliy Vasilyev and Ilya Finogeev of Webinar, LLC\n\nKernel\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: An application may be able to read restricted memory\nDescription: A validation issue was addressed with improved input\nsanitization. 
\nCVE-2019-8794: 08Tc3wBB working with SSD Secure Disclosure\n\nKernel\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2019-8786: an anonymous researcher\n\nScreen Time\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: A local user may be able to record the screen without a\nvisible screen recording indicator\nDescription: A consistency issue existed in deciding when to show the\nscreen recording indicator. \nCVE-2019-8793: Ryan Jenkins of Lake Forrest Prep School\n\nSetup Assistant\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: An attacker in physical proximity may be able to force a user\nonto a malicious Wi-Fi network during device setup\nDescription: An inconsistency in Wi-Fi network configuration settings\nwas addressed. \nCVE-2019-8804: Christy Philip Mathew of Zimperium, Inc\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2019-8813: an anonymous researcher\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. 
\nCVE-2019-8782: Cheolung Lee of LINE+ Security Team\nCVE-2019-8783: Cheolung Lee of LINE+ Graylab Security Team\nCVE-2019-8808: found by OSS-Fuzz\nCVE-2019-8811: Soyeon Park of SSLab at Georgia Tech\nCVE-2019-8812: an anonymous researcher\nCVE-2019-8814: Cheolung Lee of LINE+ Security Team\nCVE-2019-8816: Soyeon Park of SSLab at Georgia Tech\nCVE-2019-8819: Cheolung Lee of LINE+ Security Team\nCVE-2019-8820: Samuel Gro\u00df of Google Project Zero\nCVE-2019-8821: Sergei Glazunov of Google Project Zero\nCVE-2019-8822: Sergei Glazunov of Google Project Zero\nCVE-2019-8823: Sergei Glazunov of Google Project Zero\n\nWebKit Process Model\nAvailable for: iPhone 6s and later, iPad Air 2 and later, iPad mini 4\nand later, and iPod touch 7th generation\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2019-8815: Apple\n\nAdditional recognition\n\nCFNetwork\nWe would like to acknowledge Lily Chen of Google for their\nassistance. \n\nWebKit\nWe would like to acknowledge Dlive of Tencent\u0027s Xuanwu Lab and Zhiyi\nZhang of Codesafe Team of Legendsec at Qi\u0027anxin Group for their\nassistance. \n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. 
\n\nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n\n* Navigate to Settings\n* Select General\n* Select About. The version after applying this update\nwill be \"iOS 13.2 and iPadOS 13.2\"",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2019-8782"
      },
      {
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155058"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2019-8782",
        "trust": 2.7
      },
      {
        "db": "PACKETSTORM",
        "id": "155069",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.7
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "155216",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "156742",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1549",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4012",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4456",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2019.4233",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0382",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "155059",
        "trust": 0.1
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-14246",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-160217",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8782",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161742",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161536",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "155058",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155058"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "id": "VAR-201912-1853",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160217"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:35:31.618000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Multiple Apple product WebKit Fix for component buffer error vulnerability",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=105857"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-119",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://security.gentoo.org/glsa/202003-22"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210721"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210723"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210725"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210726"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht210727"
      },
      {
        "trust": 1.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.7,
        "url": "https://wpewebkit.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.7,
        "url": "https://webkitgtk.org/security/wsa-2019-0006.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-au/ht201222"
      },
      {
        "trust": 0.6,
        "url": "https://www.suse.com/support/update/announcement/2019/suse-su-20193044-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210725"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht210727"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155069/apple-security-advisory-2019-10-29-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1549/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/159816/red-hat-security-advisory-2020-4451-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0382"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/155216/webkitgtk-wpe-webkit-code-execution-xss.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4456/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/156742/gentoo-linux-security-advisory-202003-22.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4012/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2019.4233/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166279/red-hat-security-advisory-2022-0056-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-30746"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8821"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8822"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8786"
      },
      {
        "trust": 0.2,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8787"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8794"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8798"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8797"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8803"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8795"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8785"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/fulldisclosure/2019/oct/50"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27831"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27832"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-6829"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12403"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14351"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12400"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0799"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9283"
      },
      {
        "trust": 0.1,
        "url": "https://wpewebkit.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhea-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5635"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24750"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3865"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8784"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8788"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8789"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-7152"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8804"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8793"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155058"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "db": "PACKETSTORM",
        "id": "155058"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      },
      {
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "date": "2019-12-18T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "date": "2021-01-11T16:29:48",
        "db": "PACKETSTORM",
        "id": "160889"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2019-11-01T17:11:43",
        "db": "PACKETSTORM",
        "id": "155069"
      },
      {
        "date": "2021-03-10T16:02:43",
        "db": "PACKETSTORM",
        "id": "161742"
      },
      {
        "date": "2019-11-08T15:45:31",
        "db": "PACKETSTORM",
        "id": "155216"
      },
      {
        "date": "2021-02-25T15:26:54",
        "db": "PACKETSTORM",
        "id": "161536"
      },
      {
        "date": "2020-03-15T14:00:23",
        "db": "PACKETSTORM",
        "id": "156742"
      },
      {
        "date": "2019-11-01T17:05:53",
        "db": "PACKETSTORM",
        "id": "155058"
      },
      {
        "date": "2019-10-30T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      },
      {
        "date": "2019-12-18T18:15:40.617000",
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-03-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-160217"
      },
      {
        "date": "2021-12-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2019-8782"
      },
      {
        "date": "2022-03-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      },
      {
        "date": "2024-11-21T04:50:27.820000",
        "db": "NVD",
        "id": "CVE-2019-8782"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple Apple product WebKit Component Buffer Error Vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-201910-1751"
      }
    ],
    "trust": 0.6
  }
}

VAR-202108-2087

Vulnerability from variot - Updated: 2025-12-22 22:34

A logic issue was addressed with improved restrictions. This issue is fixed in macOS Monterey 12.0.1, iOS 15.1 and iPadOS 15.1, watchOS 8.1, tvOS 15.1. Processing maliciously crafted web content may lead to unexpectedly unenforced Content Security Policy. Apple's iPadOS and products from multiple vendors contain unspecified vulnerabilities; information may be tampered with. (CVE-2020-27918) "Clear History and Website Data" did not clear the history, so a user may be unable to fully delete browsing history. This issue is fixed in macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave. (CVE-2021-1789) A port redirection issue was found in WebKitGTK and WPE WebKit in versions prior to 2.30.6. A malicious website may be able to access restricted ports on arbitrary servers. The highest threat from this vulnerability is to data integrity. This issue is fixed in Safari (builds 14610.4.3.1.7 and 15610.4.3.1.7), watchOS 7.3.2, and macOS Big Sur 11.2.3. A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. (CVE-2021-1870) A use-after-free vulnerability exists in the way certain events are processed for ImageLoader objects of Webkit WebKitGTK 2.30.4. In order to trigger the vulnerability, a victim must be tricked into visiting a malicious webpage. (CVE-2021-21775) A use-after-free vulnerability exists in the way Webkit's GraphicsContext handles certain events in WebKitGTK 2.30.4. A victim must be tricked into visiting a malicious web page to trigger this vulnerability. (CVE-2021-21779) An exploitable use-after-free vulnerability exists in WebKitGTK browser version 2.30.3 x64. A specially crafted HTML web page can cause a use-after-free condition, resulting in remote code execution. The victim needs to visit a malicious web site to trigger the vulnerability. Apple is aware of a report that this issue may have been actively exploited.
Apple is aware of a report that this issue may have been actively exploited. Apple is aware of a report that this issue may have been actively exploited. A malicious application may be able to leak sensitive user information. A malicious website may be able to access restricted ports on arbitrary servers. Apple is aware of a report that this issue may have been actively exploited. Apple is aware of a report that this issue may have been actively exploited. (CVE-2021-30799) A use-after-free flaw was found in WebKitGTK. (CVE-2021-30809) A type confusion flaw was found in WebKitGTK. (CVE-2021-30818) An out-of-bounds read flaw was found in WebKitGTK. A specially crafted audio file could use this flaw to trigger a disclosure of memory when processed. (CVE-2021-30887) An information leak flaw was found in WebKitGTK. (CVE-2021-30888) A buffer overflow flaw was found in WebKitGTK. (CVE-2021-30984) ** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was in a CNA pool that was not assigned to any issues during 2021. Notes: none. (CVE-2021-32912) BubblewrapLauncher.cpp in WebKitGTK and WPE WebKit prior to 2.34.1 allows a limited sandbox bypass that allows a sandboxed process to trick host processes into thinking the sandboxed process is not confined by the sandbox, by abusing VFS syscalls that manipulate its filesystem namespace. The impact is limited to host services that create UNIX sockets that WebKit mounts inside its sandbox, and the sandboxed process remains otherwise confined. NOTE: this is similar to CVE-2021-41133. (CVE-2021-42762) A segmentation violation vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files, causing an application to halt or crash. (CVE-2021-45481) A use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files, causing an application to halt or crash.
(CVE-2021-45482) A use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files, causing an application to halt or crash. Video self-preview in a webRTC call may be interrupted if the user answers a phone call. (CVE-2022-26719) In WebKitGTK up to and including 2.36.0 (and WPE WebKit), there is a heap-based buffer overflow in WebCore::TextureMapperLayer::setContentsLayer in WebCore/platform/graphics/texmap/TextureMapperLayer.cpp. An app may be able to disclose kernel memory. Visiting a website that frames malicious content may lead to UI spoofing. Visiting a malicious website may lead to user interface spoofing. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1. (CVE-2022-46700) A flaw was found in the WebKitGTK package. An improper input validation issue may lead to a use-after-free vulnerability. This flaw allows attackers with network access to pass specially crafted web content files, causing a denial of service or arbitrary code execution. This may, in theory, allow a remote malicious user to create a specially crafted web page, trick the victim into opening it, trigger type confusion, and execute arbitrary code on the target system. (CVE-2023-23529) A use-after-free vulnerability in WebCore::RenderLayer::addChild in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25358) A use-after-free vulnerability in WebCore::RenderLayer::renderer in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25360) A use-after-free vulnerability in WebCore::RenderLayer::setNextSibling in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25361) A use-after-free vulnerability in WebCore::RenderLayer::repaintBlockSelectionGaps in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely.
(CVE-2023-25362) A use-after-free vulnerability in WebCore::RenderLayer::updateDescendantDependentFlags in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25363) The vulnerability allows a remote malicious user to bypass Same Origin Policy restrictions. (CVE-2023-27932) The vulnerability exists due to excessive data output by the application. A remote attacker can track sensitive user information. Apple is aware of a report that this issue may have been actively exploited. (CVE-2023-32373) N/A (CVE-2023-32409).

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update
Advisory ID:       RHSA-2022:1777-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1777
Issue date:        2022-05-10
CVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823
                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848
                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884
                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889
                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934
                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952
                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984
                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483
                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592
                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637
=====================================================================

  1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64

  3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

The following packages have been upgraded to a later upstream version: webkit2gtk3 (2.34.6).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
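As a quick way to tell whether a host is still behind the fixed build named in this advisory, the version comparison can be sketched with `sort -V`. This is only an illustration: the installed version is hard-coded (on a real host it would come from `rpm -q`), and `sort -V` is an approximation of RPM's own version-comparison logic.

```shell
#!/bin/sh
# Sketch: is the installed webkit2gtk3 older than the fixed build?
# The installed version would normally come from
#   rpm -q --qf '%{VERSION}-%{RELEASE}' webkit2gtk3
# but is hard-coded here for illustration.
fixed="2.34.6-1.el8"
installed="2.32.3-2.el8"   # hypothetical value

# sort -V orders version strings; if the lowest of the pair is the
# installed one and it differs from the fixed one, the host is behind.
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "webkit2gtk3 $installed predates the fix ($fixed); apply the update"
else
    echo "webkit2gtk3 $installed is at or above $fixed"
fi
```

Applying the update itself follows the usual package-manager flow described at the link above.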

  5. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: webkit2gtk3-2.34.6-1.el8.src.rpm

aarch64:
webkit2gtk3-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm

ppc64le:
webkit2gtk3-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm

s390x:
webkit2gtk3-2.34.6-1.el8.s390x.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm
webkit2gtk3-devel-2.34.6-1.el8.s390x.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm

x86_64:
webkit2gtk3-2.34.6-1.el8.i686.rpm
webkit2gtk3-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm
webkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-devel-2.34.6-1.el8.i686.rpm
webkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm
webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  6. References:

https://access.redhat.com/security/cve/CVE-2021-30809
https://access.redhat.com/security/cve/CVE-2021-30818
https://access.redhat.com/security/cve/CVE-2021-30823
https://access.redhat.com/security/cve/CVE-2021-30836
https://access.redhat.com/security/cve/CVE-2021-30846
https://access.redhat.com/security/cve/CVE-2021-30848
https://access.redhat.com/security/cve/CVE-2021-30849
https://access.redhat.com/security/cve/CVE-2021-30851
https://access.redhat.com/security/cve/CVE-2021-30884
https://access.redhat.com/security/cve/CVE-2021-30887
https://access.redhat.com/security/cve/CVE-2021-30888
https://access.redhat.com/security/cve/CVE-2021-30889
https://access.redhat.com/security/cve/CVE-2021-30890
https://access.redhat.com/security/cve/CVE-2021-30897
https://access.redhat.com/security/cve/CVE-2021-30934
https://access.redhat.com/security/cve/CVE-2021-30936
https://access.redhat.com/security/cve/CVE-2021-30951
https://access.redhat.com/security/cve/CVE-2021-30952
https://access.redhat.com/security/cve/CVE-2021-30953
https://access.redhat.com/security/cve/CVE-2021-30954
https://access.redhat.com/security/cve/CVE-2021-30984
https://access.redhat.com/security/cve/CVE-2021-45481
https://access.redhat.com/security/cve/CVE-2021-45482
https://access.redhat.com/security/cve/CVE-2021-45483
https://access.redhat.com/security/cve/CVE-2022-22589
https://access.redhat.com/security/cve/CVE-2022-22590
https://access.redhat.com/security/cve/CVE-2022-22592
https://access.redhat.com/security/cve/CVE-2022-22594
https://access.redhat.com/security/cve/CVE-2022-22620
https://access.redhat.com/security/cve/CVE-2022-22637
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/

  7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYnqQrdzjgjWX9erEAQi/6BAAhaqaCDj0g7uJ6LdXEng5SqGBFl5g6GIV
p/WSKyL+tI3BpKaaUWr6+d4tNnaQbKxhRTwTSJa8GMrOc7n6Y7LO8Y7mQj3bEFvn
z3HHLZK8EMgDUz4I0esuh0qNWnfsD/vJDuGbXlHLdNLlc5XzgX7YA6eIb1lxSbxV
ueSENHohbMJLbWoeI2gMUYGb5cAzBHrgdmFIsr4XUd6sr5Z1ZOPnQPf36vrXGdzj
mPXPijZtr9QiPgwijm4/DkJG7NQ4KyaPMOKauC7PEB1AHWIwHteRnVxnWuZLjpMf
RqYBQu2pYeTiyGky+ozshJ81mdfLyUQBR/+4KbB2TMFZHDlhxzNFZrErh4+dfQja
Cuf+IwTOSZgC/8XouTQMA27KFSYKd4PzwnB3yQeGU0NA/VngYp12BegeVHlDiadS
hO+mAk/BAAesdywt7ZTU1e1yROLm/jp0VcmvkQO+gh2WhErEFV3s0qnsu1dfuLY7
B1e0z6c/vp8lkSFs2fcx0Oq1S7nGIGZiR66loghp03nDoCcxblsxBcFV9CNq6yVG
BkEAFzMb/AHxqn7KbZeN6g4Los+3Dr7eFYPGUkVEXy+AbHqE+b99pT2TIjCOMh/L
wXOE+nX3KXbD5MCqvmF2K6w+MfIf3AxzzgirwXyLewSP8NKBmsdBtgwbgFam1QfM
Uqt+dghxtOQ=
=LCNn
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

For the oldstable distribution (buster), these problems have been fixed in version 2.34.3-1~deb10u1.

For the stable distribution (bullseye), these problems have been fixed in version 2.34.3-1~deb11u1.

We recommend that you upgrade your webkit2gtk packages.
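Because the fixed version differs between Debian releases, the mapping from release to fixed version can be sketched as below. The codename is hard-coded for illustration; on a real system it would come from /etc/os-release (`VERSION_CODENAME`).

```shell
#!/bin/sh
# Sketch: pick the fixed webkit2gtk version for a Debian release,
# per the advisory text above. Hypothetical codename value; normally:
#   . /etc/os-release; codename=$VERSION_CODENAME
codename="bullseye"

case "$codename" in
    buster)   fixed="2.34.3-1~deb10u1" ;;   # oldstable
    bullseye) fixed="2.34.3-1~deb11u1" ;;   # stable
    *)        fixed="" ;;
esac

if [ -n "$fixed" ]; then
    echo "fixed webkit2gtk version for $codename: $fixed"
else
    echo "no fixed version listed for $codename in this advisory"
fi
```

The upgrade itself follows the usual apt update/upgrade flow for the release in question.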

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

AppKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to elevate privileges Description: A logic issue was addressed with improved state management. CVE-2021-30873: Thijs Alkemade of Computest

AppleScript Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30876: Jeremy Brown, hjy79425575 CVE-2021-30879: Jeremy Brown, hjy79425575 CVE-2021-30877: Jeremy Brown CVE-2021-30880: Jeremy Brown

Audio Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to elevate privileges Description: An integer overflow was addressed through improved input validation. CVE-2021-30907: Zweig of Kunlun Lab

Bluetooth Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved state handling. CVE-2021-30899: Weiteng Chen, Zheng Zhang, and Zhiyun Qian of UC Riverside, and Yu Wang of Didi Research America

ColorSync Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A memory corruption issue existed in the processing of ICC profiles. CVE-2021-30917: Alexandru-Vlad Niculae and Mateusz Jurczyk of Google Project Zero

Continuity Camera Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30903: an anonymous researcher

CoreAudio Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing a maliciously crafted file may disclose user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30905: Mickey Jin (@patch1t) of Trend Micro

CoreGraphics Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing a maliciously crafted PDF may lead to arbitrary code execution Description: An out-of-bounds write was addressed with improved input validation. CVE-2021-30919

FileProvider Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Unpacking a maliciously crafted archive may lead to arbitrary code execution Description: An input validation issue was addressed with improved memory handling. CVE-2021-30881: Simon Huang (@HuangShaomang) and pjf of IceSword Lab of Qihoo 360

Game Center Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to access information about a user's contacts Description: A logic issue was addressed with improved restrictions. CVE-2021-30895: Denis Tokarev

Game Center Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to read user's gameplay data Description: A logic issue was addressed with improved restrictions. CVE-2021-30896: Denis Tokarev

iCloud Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A local attacker may be able to elevate their privileges Description: This issue was addressed with improved checks. CVE-2021-30906: Cees Elzinga

Intel Graphics Driver Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved state management. CVE-2021-30824: Antonio Zekic (@antoniozekic) of Diverto

Intel Graphics Driver Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: Multiple out-of-bounds write issues were addressed with improved bounds checking. CVE-2021-30901: Zuozhi Fan (@pattern_F_) of Ant Security TianQiong Lab, Yinyi Wu (@3ndy1) of Ant Security Light-Year Lab, Jack Dates of RET2 Systems, Inc.

IOGraphics Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30821: Tim Michaud (@TimGMichaud) of Zoom Video Communications

IOMobileFrameBuffer Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30883: an anonymous researcher

Kernel Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2021-30886: @0xalsr

Kernel Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30909: Zweig of Kunlun Lab

Kernel Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30916: Zweig of Kunlun Lab

LaunchServices Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved state management. CVE-2021-30864: Ron Hass (@ronhass7) of Perception Point

Login Window Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A person with access to a host Mac may be able to bypass the Login Window in Remote Desktop for a locked instance of macOS Description: This issue was addressed with improved checks. CVE-2021-30813: Benjamin Berger of BBetterTech LLC, Peter Goedtkindt of Informatique-MTF S.A., an anonymous researcher

Model I/O Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing a maliciously crafted file may disclose user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30910: Mickey Jin (@patch1t) of Trend Micro

Model I/O Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30911: Rui Yang and Xingwei Lin of Ant Security Light-Year Lab

Sandbox Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A local attacker may be able to read sensitive information Description: A permissions issue was addressed with improved validation. CVE-2021-30920: Csaba Fitzl (@theevilbit) of Offensive Security

SMB Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2021-30868: Peter Nguyen Vu Hoang of STAR Labs

SoftwareUpdate Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may gain access to a user's Keychain items Description: The issue was addressed with improved permissions logic. CVE-2021-30912: Kirin (@Pwnrin) and chenyuwang (@mzzzz__) of Tencent Security Xuanwu Lab

SoftwareUpdate Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: An unprivileged application may be able to edit NVRAM variables Description: The issue was addressed with improved permissions logic. CVE-2021-30913: Kirin (@Pwnrin) and chenyuwang (@mzzzz__) of Tencent Security Xuanwu Lab

UIKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A person with physical access to an iOS device may be able to determine characteristics of a user's password in a secure text entry field Description: A logic issue was addressed with improved state management. CVE-2021-30915: Kostas Angelopoulos

WebKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: An attacker in a privileged network position may be able to bypass HSTS Description: A logic issue was addressed with improved restrictions. CVE-2021-30823: David Gullasch of Recurity Labs

WebKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing maliciously crafted web content may lead to unexpectedly unenforced Content Security Policy Description: A logic issue was addressed with improved restrictions. CVE-2021-30887: Narendra Bhati (@imnarendrabhati) of Suma Soft Pvt. Ltd.

WebKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious website using Content Security Policy reports may be able to leak information via redirect behavior Description: An information leakage issue was addressed. CVE-2021-30888: Prakash (@1lastBr3ath)

WebKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. CVE-2021-30889: Chijin Zhou of ShuiMuYuLin Ltd and Tsinghua wingtecher lab

WebKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may bypass Gatekeeper checks Description: A logic issue was addressed with improved state management. CVE-2021-30861: Wojciech Reguła (@_r3ggi), Ryan Pickren (ryanpickren.com)

WebKit Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A logic issue was addressed with improved state management. CVE-2021-30890: an anonymous researcher

Windows Server Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A local attacker may be able to view the previously logged in user's desktop from the fast user switching screen Description: An authentication issue was addressed with improved state management. CVE-2021-30908: ASentientBot

xar Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: Unpacking a maliciously crafted archive may allow an attacker to write arbitrary files Description: This issue was addressed with improved checks. CVE-2021-30833: Richard Warren of NCC Group

zsh Available for: Mac Pro (2013 and later), MacBook Air (Early 2015 and later), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and later), iMac (Late 2015 and later), MacBook (Early 2016 and later), iMac Pro (2017 and later) Impact: A malicious application may be able to modify protected parts of the file system Description: An inherited permissions issue was addressed with additional restrictions. CVE-2021-30892: Jonathan Bar Or of Microsoft

Additional recognition

APFS We would like to acknowledge Koh M. Nakagawa of FFRI Security, Inc. for their assistance.

App Support We would like to acknowledge an anonymous researcher, 漂亮鼠 of 赛博回忆录 for their assistance.

Bluetooth We would like to acknowledge say2 of ENKI for their assistance.

CUPS We would like to acknowledge an anonymous researcher for their assistance.

iCloud We would like to acknowledge Ryan Pickren (ryanpickren.com) for their assistance.

Kernel We would like to acknowledge Anthony Steinhauser of Google's Safeside project for their assistance.

Mail We would like to acknowledge Fabian Ising and Damian Poddebniak of Münster University of Applied Sciences for their assistance.

Managed Configuration We would like to acknowledge Michal Moravec of Logicworks, s.r.o. for their assistance.

smbx We would like to acknowledge Zhongcheng Li (CK01) for their assistance.

WebKit We would like to acknowledge Ivan Fratric of Google Project Zero, Pavel Gromadchuk, an anonymous researcher for their assistance.

Installation note: This update may be obtained from the Mac App Store

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmF4hpwACgkQeC9qKD1p
rhhm0Q//fIQiOk2S9w2qirXapPqEpyI9LNJnGX/RCrsZGN/iFkgvt27/RYLhHHQk
efqxE6nnXdUaj9HoIIHiG4rKxIhfkscw1dF9igvmYm6j+V2KMiRxp1Pev1zMzsBI
N6F7mJ4SiATHDTJATU8uCqIqHRQsvcIrHCjovblqGfuZxzvsjkvtRc0eXC0XAARf
xW0WRNbTBoCOEsMp92hNI45B/oK05b1aHm2pY529gE6GRBBl0ymVo30fQ7vmIoJY
Uajc6pDNeJ1MhSpo0k+Z+eVodSdBN2EutKZfU5+4t2GzqeW5nLZFa/oqXObXBhXk
i8bptOhceBu6qD9poSgkS5EdH4OdRQMcMjsQLIRJj3N/MwZBhGvsLQDlyGmtd+VG
a0s+pna/WoFwzw800CYRarmL0rRsZ4zZza0iuKArhrLlQCw+ee6XNL+1U50zvMaW
oT3gNkf3faCqQDxecIcQTj7xwt2tHV87p7uqELiuUZaCk5UoQBsWxGeGebFGxUq5
pJVQvnr4RVrDkpOQjbKj8w9mWoSZcvKlhRNL9J5kW75zd32vwnaVMlVkIG8vfvoK
sgq/VfKrOW+EV1IMAh4iuaMiLAPjwBzMiRfjvRZFeJmTaMaTOxDKHwkG5YwPNp5W
0FlhV1S2pAmGlQZgvTxkBthtU9A9giuH+oHSGJDjr70Q7de8lJ4=
=3Pcg
-----END PGP SIGNATURE-----

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202202-01


                                       https://security.gentoo.org/

Severity: High
Title: WebkitGTK+: Multiple vulnerabilities
Date: February 01, 2022
Bugs: #779175, #801400, #813489, #819522, #820434, #829723, #831739
ID: 202202-01


Synopsis

Multiple vulnerabilities have been found in WebkitGTK+, the worst of which could result in the arbitrary execution of code.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

1 net-libs/webkit-gtk < 2.34.4 >= 2.34.4

Description

Multiple vulnerabilities have been discovered in WebkitGTK+. Please review the CVE identifiers referenced below for details.

Workaround

There is no known workaround at this time.

Resolution

All WebkitGTK+ users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/webkit-gtk-2.34.4"
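After upgrading, the `< 2.34.4` / `>= 2.34.4` boundary from the affected-packages table above can be checked with a small version_ge helper. This is a sketch: the comparison uses `sort -V` (not Portage's own version logic), and the installed version shown is a hypothetical value.

```shell
#!/bin/sh
# Sketch: version_ge A B is true when A >= B under sort -V ordering.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

installed="2.34.4"   # hypothetical; e.g. from qlist -Iv net-libs/webkit-gtk
if version_ge "$installed" "2.34.4"; then
    echo "net-libs/webkit-gtk $installed is in the unaffected range"
else
    echo "net-libs/webkit-gtk $installed is vulnerable (< 2.34.4); upgrade"
fi
```

Note that `sort -V` treats 2.34.10 as newer than 2.34.4, matching the numeric component ordering used by most package managers.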

References

[ 1 ] CVE-2021-30848 https://nvd.nist.gov/vuln/detail/CVE-2021-30848
[ 2 ] CVE-2021-30888 https://nvd.nist.gov/vuln/detail/CVE-2021-30888
[ 3 ] CVE-2021-30682 https://nvd.nist.gov/vuln/detail/CVE-2021-30682
[ 4 ] CVE-2021-30889 https://nvd.nist.gov/vuln/detail/CVE-2021-30889
[ 5 ] CVE-2021-30666 https://nvd.nist.gov/vuln/detail/CVE-2021-30666
[ 6 ] CVE-2021-30665 https://nvd.nist.gov/vuln/detail/CVE-2021-30665
[ 7 ] CVE-2021-30890 https://nvd.nist.gov/vuln/detail/CVE-2021-30890
[ 8 ] CVE-2021-30661 https://nvd.nist.gov/vuln/detail/CVE-2021-30661
[ 9 ] WSA-2021-0005 https://webkitgtk.org/security/WSA-2021-0005.html
[ 10 ] CVE-2021-30761 https://nvd.nist.gov/vuln/detail/CVE-2021-30761
[ 11 ] CVE-2021-30897 https://nvd.nist.gov/vuln/detail/CVE-2021-30897
[ 12 ] CVE-2021-30823 https://nvd.nist.gov/vuln/detail/CVE-2021-30823
[ 13 ] CVE-2021-30734 https://nvd.nist.gov/vuln/detail/CVE-2021-30734
[ 14 ] CVE-2021-30934 https://nvd.nist.gov/vuln/detail/CVE-2021-30934
[ 15 ] CVE-2021-1871 https://nvd.nist.gov/vuln/detail/CVE-2021-1871
[ 16 ] CVE-2021-30762 https://nvd.nist.gov/vuln/detail/CVE-2021-30762
[ 17 ] WSA-2021-0006 https://webkitgtk.org/security/WSA-2021-0006.html
[ 18 ] CVE-2021-30797 https://nvd.nist.gov/vuln/detail/CVE-2021-30797
[ 19 ] CVE-2021-30936 https://nvd.nist.gov/vuln/detail/CVE-2021-30936
[ 20 ] CVE-2021-30663 https://nvd.nist.gov/vuln/detail/CVE-2021-30663
[ 21 ] CVE-2021-1825 https://nvd.nist.gov/vuln/detail/CVE-2021-1825
[ 22 ] CVE-2021-30951 https://nvd.nist.gov/vuln/detail/CVE-2021-30951
[ 23 ] CVE-2021-30952 https://nvd.nist.gov/vuln/detail/CVE-2021-30952
[ 24 ] CVE-2021-1788 https://nvd.nist.gov/vuln/detail/CVE-2021-1788
[ 25 ] CVE-2021-1820 https://nvd.nist.gov/vuln/detail/CVE-2021-1820
[ 26 ] CVE-2021-30953 https://nvd.nist.gov/vuln/detail/CVE-2021-30953
[ 27 ] CVE-2021-30749 https://nvd.nist.gov/vuln/detail/CVE-2021-30749
[ 28 ] CVE-2021-30849 https://nvd.nist.gov/vuln/detail/CVE-2021-30849
[ 29 ] CVE-2021-1826 https://nvd.nist.gov/vuln/detail/CVE-2021-1826
[ 30 ] CVE-2021-30836 https://nvd.nist.gov/vuln/detail/CVE-2021-30836
[ 31 ] CVE-2021-30954 https://nvd.nist.gov/vuln/detail/CVE-2021-30954
[ 32 ] CVE-2021-30984 https://nvd.nist.gov/vuln/detail/CVE-2021-30984
[ 33 ] CVE-2021-30851 https://nvd.nist.gov/vuln/detail/CVE-2021-30851
[ 34 ] CVE-2021-30758 https://nvd.nist.gov/vuln/detail/CVE-2021-30758
[ 35 ] CVE-2021-42762 https://nvd.nist.gov/vuln/detail/CVE-2021-42762
[ 36 ] CVE-2021-1844 https://nvd.nist.gov/vuln/detail/CVE-2021-1844
[ 37 ] CVE-2021-30689 https://nvd.nist.gov/vuln/detail/CVE-2021-30689
[ 38 ] CVE-2021-45482 https://nvd.nist.gov/vuln/detail/CVE-2021-45482
[ 39 ] CVE-2021-30858 https://nvd.nist.gov/vuln/detail/CVE-2021-30858
[ 40 ] CVE-2021-21779 https://nvd.nist.gov/vuln/detail/CVE-2021-21779
[ 41 ] WSA-2021-0004 https://webkitgtk.org/security/WSA-2021-0004.html
[ 42 ] CVE-2021-30846 https://nvd.nist.gov/vuln/detail/CVE-2021-30846
[ 43 ] CVE-2021-30744 https://nvd.nist.gov/vuln/detail/CVE-2021-30744
[ 44 ] CVE-2021-30809 https://nvd.nist.gov/vuln/detail/CVE-2021-30809
[ 45 ] CVE-2021-30884 https://nvd.nist.gov/vuln/detail/CVE-2021-30884
[ 46 ] CVE-2021-30720 https://nvd.nist.gov/vuln/detail/CVE-2021-30720
[ 47 ] CVE-2021-30799 https://nvd.nist.gov/vuln/detail/CVE-2021-30799
[ 48 ] CVE-2021-30795 https://nvd.nist.gov/vuln/detail/CVE-2021-30795
[ 49 ] CVE-2021-1817 https://nvd.nist.gov/vuln/detail/CVE-2021-1817
[ 50 ] CVE-2021-21775 https://nvd.nist.gov/vuln/detail/CVE-2021-21775
[ 51 ] CVE-2021-30887 https://nvd.nist.gov/vuln/detail/CVE-2021-30887
[ 52 ] CVE-2021-21806 https://nvd.nist.gov/vuln/detail/CVE-2021-21806
[ 53 ] CVE-2021-30818 https://nvd.nist.gov/vuln/detail/CVE-2021-30818

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202202-01

Concerns?

Security is a primary focus of Gentoo Linux, and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org; alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc.; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website
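The record below carries precomputed CVSS scores next to their vector strings, e.g. `CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:H/A:N` with base score 6.5, exploitability 2.8, and impact 3.6. As a minimal sketch of how those numbers relate, the following reproduces the CVSS v3.1 base-score formula from the FIRST specification; it covers only the Scope: Unchanged base metrics used by this vector, not the full standard:

```python
import math

# Metric weights from the CVSS v3.1 specification (Base metrics, Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (Unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}               # Confidentiality/Integrity/Availability

def roundup(value: float) -> float:
    """CVSS v3.1 'Roundup': smallest number to one decimal place >= value."""
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (math.floor(int_input / 10000) + 1) / 10.0

def cvss31_base_score(vector: str) -> float:
    # Parse "CVSS:3.1/AV:N/AC:L/..." into {"AV": "N", "AC": "L", ...}
    m = dict(part.split(":") for part in vector.split("/")[1:])
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss  # Scope: Unchanged form of the Impact sub-score
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(cvss31_base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:H/A:N"))  # 6.5
```

The intermediate values (exploitability ≈ 2.84, impact ≈ 3.60) round to the 2.8 and 3.6 stored in the record's `cvssV3` entry.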

{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202108-2087",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.1"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "8.1"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.1"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.1"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0.1"
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "watchos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": "8.1"
      },
      {
        "model": "ios",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "tvos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164670"
      },
      {
        "db": "PACKETSTORM",
        "id": "164672"
      },
      {
        "db": "PACKETSTORM",
        "id": "164683"
      }
    ],
    "trust": 0.3
  },
  "cve": "CVE-2021-30887",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-30887",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "VHN-390620",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:N/I:P/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.8,
            "id": "CVE-2021-30887",
            "impactScore": 3.6,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:H/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.5,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2021-30887",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:H/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-30887",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-30887",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202108-1978",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-390620",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-30887",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390620"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A logic issue was addressed with improved restrictions. This issue is fixed in macOS Monterey 12.0.1, iOS 15.1 and iPadOS 15.1, watchOS 8.1, tvOS 15.1. Processing maliciously crafted web content may lead to unexpectedly unenforced Content Security Policy. apple\u0027s iPadOS Unspecified vulnerabilities exist in products from multiple vendors.Information may be tampered with. (CVE-2020-27918)\n\"Clear History and Website Data\" did not clear the history. A user may be unable to fully delete browsing history. This issue is fixed in macOS Big Sur 11.2, Security Update 2021-001 Catalina, Security Update 2021-001 Mojave. (CVE-2021-1789)\nA port redirection issue was found in WebKitGTK and WPE WebKit in versions prior to 2.30.6. A malicious website may be able to access restricted ports on arbitrary servers. The highest threat from this vulnerability is to data integrity. 14610.4.3.1.7 and 15610.4.3.1.7), watchOS 7.3.2, macOS Big Sur 11.2.3. A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-1870)\nA use-after-free vulnerability exists in the way certain events are processed for ImageLoader objects of Webkit WebKitGTK 2.30.4. In order to trigger the vulnerability, a victim must be tricked into visiting a malicious webpage. (CVE-2021-21775)\nA use-after-free vulnerability exists in the way Webkit\u0027s GraphicsContext handles certain events in WebKitGTK 2.30.4. A victim must be tricked into visiting a malicious web page to trigger this vulnerability. (CVE-2021-21779)\nAn exploitable use-after-free vulnerability exists in WebKitGTK browser version 2.30.3 x64. A specially crafted HTML web page can cause a use-after-free condition, resulting in remote code execution. The victim needs to visit a malicious web site to trigger the vulnerability. Apple is aware of a report that this issue may have been actively exploited.. 
Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. A malicious application may be able to leak sensitive user information. A malicious website may be able to access restricted ports on arbitrary servers. Apple is aware of a report that this issue may have been actively exploited.. Apple is aware of a report that this issue may have been actively exploited.. (CVE-2021-30799)\nA use-after-free flaw was found in WebKitGTK. (CVE-2021-30809)\nA confusion type flaw was found in WebKitGTK. (CVE-2021-30818)\nAn out-of-bounds read flaw was found in WebKitGTK. A specially crafted audio file could use this flaw to trigger a disclosure of memory when processed. (CVE-2021-30887)\nAn information leak flaw was found in WebKitGTK. (CVE-2021-30888)\nA buffer overflow flaw was found in WebKitGTK. (CVE-2021-30984)\n** REJECT ** DO NOT USE THIS CANDIDATE NUMBER. ConsultIDs: none. Reason: This candidate was in a CNA pool that was not assigned to any issues during 2021. Notes: none. (CVE-2021-32912)\nBubblewrapLauncher.cpp in WebKitGTK and WPE WebKit prior to 2.34.1 allows a limited sandbox bypass that allows a sandboxed process to trick host processes into thinking the sandboxed process is not confined by the sandbox, by abusing VFS syscalls that manipulate its filesystem namespace. The impact is limited to host services that create UNIX sockets that WebKit mounts inside its sandbox, and the sandboxed process remains otherwise confined. NOTE: this is similar to CVE-2021-41133. (CVE-2021-42762)\nA segmentation violation vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. (CVE-2021-45481)\nA use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. 
(CVE-2021-45482)\nA use-after-free vulnerability was found in webkitgtk. An attacker with network access could pass specially crafted HTML files causing an application to halt or crash. Video self-preview in a webRTC call may be interrupted if the user answers a phone call. (CVE-2022-26719)\nIn WebKitGTK up to and including 2.36.0 (and WPE WebKit), there is a heap-based buffer overflow in WebCore::TextureMapperLayer::setContentsLayer in WebCore/platform/graphics/texmap/TextureMapperLayer.cpp. An app may be able to disclose kernel memory. Visiting a website that frames malicious content may lead to UI spoofing. Visiting a malicious website may lead to user interface spoofing. Apple is aware of a report that this issue may have been actively exploited against versions of iOS released before iOS 15.1.. (CVE-2022-46700)\nA flaw was found in the WebKitGTK package. An improper input validation issue may lead to a use-after-free vulnerability. This flaw allows attackers with network access to pass specially crafted web content files, causing a denial of service or arbitrary code execution. This may, in theory, allow a remote malicious user to create a specially crafted web page, trick the victim into opening it, trigger type confusion, and execute arbitrary code on the target system. (CVE-2023-23529)\nA use-after-free vulnerability in WebCore::RenderLayer::addChild in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25358)\nA use-after-free vulnerability in WebCore::RenderLayer::renderer in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25360)\nA use-after-free vulnerability in WebCore::RenderLayer::setNextSibling in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25361)\nA use-after-free vulnerability in WebCore::RenderLayer::repaintBlockSelectionGaps in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. 
(CVE-2023-25362)\nA use-after-free vulnerability in WebCore::RenderLayer::updateDescendantDependentFlags in WebKitGTK prior to 2.36.8 allows malicious users to execute code remotely. (CVE-2023-25363)\nThe vulnerability allows a remote malicious user to bypass Same Origin Policy restrictions. (CVE-2023-27932)\nThe vulnerability exists due to excessive data output by the application. A remote attacker can track sensitive user information. Apple is aware of a report that this issue may have been actively exploited. (CVE-2023-32373)\nN/A (CVE-2023-32409). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2022:1777-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:1777\nIssue date:        2022-05-10\nCVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823 \n                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848 \n                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884 \n                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889 \n                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934 \n                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952 \n                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984 \n                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483 \n                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592 \n                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637 \n=====================================================================\n\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nThe following packages have been upgraded to a later upstream version:\nwebkit2gtk3 (2.34.6). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\nSource:\nwebkit2gtk3-2.34.6-1.el8.src.rpm\n\naarch64:\nwebkit2gtk3-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-2.3
4.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-30809\nhttps://access.redhat.com/security/cve/CVE-2021-30818\nhttps://access.redhat.com/security/cve/CVE-2021-30823\nhttps://access.redhat.com/security/cve/CVE-2021-30836\nhttps://access.redhat.com/security/cve/CVE-2021-30846\nhttps://access.redhat.com/security/cve/CVE-2021-30848\nhttps://access.redhat.com/security/cve/CVE-2021-30849\nhttps://access.redhat.com/security/cve/CVE-2021-30851\nhttps://access.redhat.com/security/cve/CVE-2021-30884\nhttps://access.redhat.com/security/cve/CVE-2021-30887\nhttps://access.redhat.com/security/cve/CVE-2021-30888\nhttps://access.redhat.com/security/cve/CVE-2021-30889\nhttps://access.redhat.com/security/cve/CVE-2021-30890\nhttps://access.redhat.com/security/cve/CVE-2021-30897\nhttps://access.redhat.com/security/cve/CVE-2021-30934\nhttps://access.redhat.com/security/cve/CVE-2021-30936\nhttps://access.redhat.com/security/cve/CVE-2021-30951\nhttps://access.redhat.com/security/cve/CVE-2021-30952\nhttps://access.redhat.com/security/cve/CVE-2021-30953\nhttps://access.redhat.com/security/cve/CVE-2021-30954\nhttps://access.redhat.com/security/cve/CVE-2021-30984\nhttps://access.redhat.com/security/cve/CVE-2021-45481\nhttps://access.redhat.com/security/cve/CVE-2021-45482\nhttps://access.redhat.com/security/cve/CVE-2021-45483\nhttps://access.redhat.com/security/cve/CVE-2022-22589\nhttps://access.redhat.com/security/cve/CVE-2022-22590\nhttps://access.redhat.com/security/cve/CVE-2022-22592\nhttps://access.redhat.com/security/cve/CVE-2022-22594\nhttps://access.redhat.com/security/cve/CVE-2022-22620\nhttps://access.redhat.com/security/cve/CVE-2022-22637\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnqQrdzjgjWX9erEAQi/6BAAhaqaCDj0g7uJ6LdXEng5SqGBFl5g6GIV\np/WSKyL+tI3BpKaaUWr6+d4tNnaQbKxhRTwTSJa8GMrOc7n6Y7LO8Y7mQj3bEFvn\nz3HHLZK8EMgDUz4I0esuh0qNWnfsD/vJDuGbXlHLdNLlc5XzgX7YA6eIb1lxSbxV\nueSENHohbMJLbWoeI2gMUYGb5cAzBHrgdmFIsr4XUd6sr5Z1ZOPnQPf36vrXGdzj\nmPXPijZtr9QiPgwijm4/DkJG7NQ4KyaPMOKauC7PEB1AHWIwHteRnVxnWuZLjpMf\nRqYBQu2pYeTiyGky+ozshJ81mdfLyUQBR/+4KbB2TMFZHDlhxzNFZrErh4+dfQja\nCuf+IwTOSZgC/8XouTQMA27KFSYKd4PzwnB3yQeGU0NA/VngYp12BegeVHlDiadS\nhO+mAk/BAAesdywt7ZTU1e1yROLm/jp0VcmvkQO+gh2WhErEFV3s0qnsu1dfuLY7\nB1e0z6c/vp8lkSFs2fcx0Oq1S7nGIGZiR66loghp03nDoCcxblsxBcFV9CNq6yVG\nBkEAFzMb/AHxqn7KbZeN6g4Los+3Dr7eFYPGUkVEXy+AbHqE+b99pT2TIjCOMh/L\nwXOE+nX3KXbD5MCqvmF2K6w+MfIf3AxzzgirwXyLewSP8NKBmsdBtgwbgFam1QfM\nUqt+dghxtOQ=\n=LCNn\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nFor the oldstable distribution (buster), these problems have been fixed\nin version 2.34.3-1~deb10u1. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 2.34.3-1~deb11u1. \n\nWe recommend that you upgrade your webkit2gtk packages. \n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. 
Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nAppKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to elevate privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30873: Thijs Alkemade of Computest\n\nAppleScript\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30876: Jeremy Brown, hjy79425575\nCVE-2021-30879: Jeremy Brown, hjy79425575\nCVE-2021-30877: Jeremy Brown\nCVE-2021-30880: Jeremy Brown\n\nAudio\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to elevate privileges\nDescription: An integer overflow was addressed through improved input\nvalidation. 
\nCVE-2021-30907: Zweig of Kunlun Lab\n\nBluetooth\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A race condition was addressed with improved state\nhandling. \nCVE-2021-30899: Weiteng Chen, Zheng Zhang, and Zhiyun Qian of UC\nRiverside, and Yu Wang of Didi Research America\n\nColorSync\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A memory corruption issue existed in the processing of\nICC profiles. \nCVE-2021-30917: Alexandru-Vlad Niculae and Mateusz Jurczyk of Google\nProject Zero\n\nContinuity Camera\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30903: an anonymous researcher\n\nCoreAudio\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing a maliciously crafted file may disclose user\ninformation\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2021-30905: Mickey Jin (@patch1t) of Trend Micro\n\nCoreGraphics\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing a maliciously crafted PDF may lead to arbitrary\ncode execution\nDescription: An out-of-bounds write was addressed with improved input\nvalidation. \nCVE-2021-30919\n\nFileProvider\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Unpacking a maliciously crafted archive may lead to arbitrary\ncode execution\nDescription: An input validation issue was addressed with improved\nmemory handling. \nCVE-2021-30881: Simon Huang (@HuangShaomang) and pjf of IceSword Lab\nof Qihoo 360\n\nGame Center\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to access information\nabout a user\u0027s contacts\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30895: Denis Tokarev\n\nGame Center\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to read user\u0027s gameplay\ndata\nDescription: A logic issue was addressed with improved restrictions. 
\nCVE-2021-30896: Denis Tokarev\n\niCloud\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A local attacker may be able to elevate their privileges\nDescription: This issue was addressed with improved checks. \nCVE-2021-30906: Cees Elzinga\n\nIntel Graphics Driver\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30824: Antonio Zekic (@antoniozekic) of Diverto\n\nIntel Graphics Driver\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: Multiple out-of-bounds write issues were addressed with\nimproved bounds checking. \nCVE-2021-30901: Zuozhi Fan (@pattern_F_) of Ant Security TianQiong\nLab, Yinyi Wu (@3ndy1) of Ant Security Light-Year Lab, Jack Dates of\nRET2 Systems, Inc. \n\nIOGraphics\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30821: Tim Michaud (@TimGMichaud) of Zoom Video\nCommunications\n\nIOMobileFrameBuffer\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2021-30883: an anonymous researcher\n\nKernel\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30886: @0xalsr\n\nKernel\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2021-30909: Zweig of Kunlun Lab\n\nKernel\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30916: Zweig of Kunlun Lab\n\nLaunchServices\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30864: Ron Hass (@ronhass7) of Perception Point\n\nLogin Window\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A person with access to a host Mac may be able to bypass the\nLogin Window in Remote Desktop for a locked instance of macOS\nDescription: This issue was addressed with improved checks. \nCVE-2021-30813: Benjamin Berger of BBetterTech LLC, Peter Goedtkindt\nof Informatique-MTF S.A., an anonymous researcher\n\nModel I/O\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing a maliciously crafted file may disclose user\ninformation\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30910: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2021-30911: Rui Yang and Xingwei Lin of Ant Security Light-Year\nLab\n\nSandbox\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A local attacker may be able to read sensitive information\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2021-30920: Csaba Fitzl (@theevilbit) of Offensive Security\n\nSMB\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A race condition was addressed with improved locking. \nCVE-2021-30868: Peter Nguyen Vu Hoang of STAR Labs\n\nSoftwareUpdate\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may gain access to a user\u0027s Keychain\nitems\nDescription: The issue was addressed with improved permissions logic. \nCVE-2021-30912: Kirin (@Pwnrin) and chenyuwang (@mzzzz__) of Tencent\nSecurity Xuanwu Lab\n\nSoftwareUpdate\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: An unprivileged application may be able to edit NVRAM\nvariables\nDescription: The issue was addressed with improved permissions logic. 
\nCVE-2021-30913: Kirin (@Pwnrin) and chenyuwang (@mzzzz__) of Tencent\nSecurity Xuanwu Lab\n\nUIKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A person with physical access to an iOS device may be\ndetermine characteristics of a user\u0027s password in a secure text entry\nfield\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30915: Kostas Angelopoulos\n\nWebKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: An attacker in a privileged network position may be able to\nbypass HSTS\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30823: David Gullasch of Recurity Labs\n\nWebKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing maliciously crafted web content may lead to\nunexpectedly unenforced Content Security Policy\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30887: Narendra Bhati (@imnarendrabhati) of Suma Soft Pvt. \nLtd. \n\nWebKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious website using Content Security Policy reports may\nbe able to leak information via redirect behavior \nDescription: An information leakage issue was addressed. 
\nCVE-2021-30888: Prakash (@1lastBr3ath)\n\nWebKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nCVE-2021-30889: Chijin Zhou of ShuiMuYuLin Ltd and Tsinghua\nwingtecher lab\n\nWebKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may bypass Gatekeeper checks\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30861: Wojciech Regu\u0142a (@_r3ggi), Ryan Pickren\n(ryanpickren.com)\n\nWebKit\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30890: an anonymous researcher\n\nWindows Server\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A local attacker may be able to view the previous logged in\nuser\u2019s desktop from the fast user switching screen\nDescription: An authentication issue was addressed with improved\nstate management. 
\nCVE-2021-30908: ASentientBot\n\nxar\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: Unpacking a maliciously crafted archive may allow an attacker\nto write arbitrary files\nDescription: This issue was addressed with improved checks. \nCVE-2021-30833: Richard Warren of NCC Group\n\nzsh\nAvailable for: Mac Pro (2013 and later), MacBook Air (Early 2015 and\nlater), MacBook Pro (Early 2015 and later), Mac mini (Late 2014 and\nlater), iMac (Late 2015 and later), MacBook (Early 2016 and later),\niMac Pro (2017 and later)\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: An inherited permissions issue was addressed with\nadditional restrictions. \nCVE-2021-30892: Jonathan Bar Or of Microsoft\n\nAdditional recognition\n\nAPFS\nWe would like to acknowledge Koh M. Nakagawa of FFRI Security, Inc. \nfor their assistance. \n\nApp Support\nWe would like to acknowledge an anonymous researcher, \u6f02\u4eae\u9f20 of \u8d5b\u535a\u56de\u5fc6\u5f55\nfor their assistance. \n\nBluetooth\nWe would like to acknowledge say2 of ENKI for their assistance. \n\nCUPS\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\niCloud\nWe would like to acknowledge Ryan Pickren (ryanpickren.com) for their\nassistance. \n\nKernel\nWe would like to acknowledge Anthony Steinhauser of Google\u0027s Safeside\nproject for their assistance. \n\nMail\nWe would like to acknowledge Fabian Ising and Damian Poddebniak of\nM\u00fcnster University of Applied Sciences for their assistance. \n\nManaged Configuration\nWe would like to acknowledge Michal Moravec of Logicworks, s.r.o. for\ntheir assistance. \n\nsmbx\nWe would like to acknowledge Zhongcheng Li (CK01) for their\nassistance. 
\n\nWebKit\nWe would like to acknowledge Ivan Fratric of Google Project Zero,\nPavel Gromadchuk, an anonymous researcher for their assistance. \n\nInstallation note:\nThis update may be obtained from the Mac App Store\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmF4hpwACgkQeC9qKD1p\nrhhm0Q//fIQiOk2S9w2qirXapPqEpyI9LNJnGX/RCrsZGN/iFkgvt27/RYLhHHQk\nefqxE6nnXdUaj9HoIIHiG4rKxIhfkscw1dF9igvmYm6j+V2KMiRxp1Pev1zMzsBI\nN6F7mJ4SiATHDTJATU8uCqIqHRQsvcIrHCjovblqGfuZxzvsjkvtRc0eXC0XAARf\nxW0WRNbTBoCOEsMp92hNI45B/oK05b1aHm2pY529gE6GRBBl0ymVo30fQ7vmIoJY\nUajc6pDNeJ1MhSpo0k+Z+eVodSdBN2EutKZfU5+4t2GzqeW5nLZFa/oqXObXBhXk\ni8bptOhceBu6qD9poSgkS5EdH4OdRQMcMjsQLIRJj3N/MwZBhGvsLQDlyGmtd+VG\na0s+pna/WoFwzw800CYRarmL0rRsZ4zZza0iuKArhrLlQCw+ee6XNL+1U50zvMaW\noT3gNkf3faCqQDxecIcQTj7xwt2tHV87p7uqELiuUZaCk5UoQBsWxGeGebFGxUq5\npJVQvnr4RVrDkpOQjbKj8w9mWoSZcvKlhRNL9J5kW75zd32vwnaVMlVkIG8vfvoK\nsgq/VfKrOW+EV1IMAh4iuaMiLAPjwBzMiRfjvRZFeJmTaMaTOxDKHwkG5YwPNp5W\n0FlhV1S2pAmGlQZgvTxkBthtU9A9giuH+oHSGJDjr70Q7de8lJ4=\n=3Pcg\n-----END PGP SIGNATURE-----\n\n\n. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202202-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: WebkitGTK+: Multiple vulnerabilities\n     Date: February 01, 2022\n     Bugs: #779175, #801400, #813489, #819522, #820434, #829723,\n           #831739\n       ID: 202202-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in WebkitGTK+, the worst of\nwhich could result in the arbitrary execution of code. \n\nAffected packages\n================\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-libs/webkit-gtk        \u003c 2.34.4                    \u003e= 2.34.4\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in WebkitGTK+. Please\nreview the CVE identifiers referenced below for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll WebkitGTK+ users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/webkit-gtk-2.34.4\"\n\nReferences\n=========\n[ 1 ] CVE-2021-30848\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30848\n[ 2 ] CVE-2021-30888\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30888\n[ 3 ] CVE-2021-30682\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30682\n[ 4 ] CVE-2021-30889\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30889\n[ 5 ] CVE-2021-30666\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30666\n[ 6 ] CVE-2021-30665\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30665\n[ 7 ] CVE-2021-30890\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30890\n[ 8 ] CVE-2021-30661\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30661\n[ 9 ] WSA-2021-0005\n      https://webkitgtk.org/security/WSA-2021-0005.html\n[ 10 ] CVE-2021-30761\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30761\n[ 11 ] CVE-2021-30897\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30897\n[ 12 ] CVE-2021-30823\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30823\n[ 13 ] CVE-2021-30734\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30734\n[ 14 ] CVE-2021-30934\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30934\n[ 15 ] CVE-2021-1871\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1871\n[ 16 ] CVE-2021-30762\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30762\n[ 17 ] WSA-2021-0006\n      https://webkitgtk.org/security/WSA-2021-0006.html\n[ 18 ] CVE-2021-30797\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30797\n[ 19 ] CVE-2021-30936\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30936\n[ 20 ] CVE-2021-30663\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30663\n[ 21 ] CVE-2021-1825\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1825\n[ 22 ] CVE-2021-30951\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30951\n[ 23 ] CVE-2021-30952\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30952\n[ 24 ] 
CVE-2021-1788\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1788\n[ 25 ] CVE-2021-1820\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1820\n[ 26 ] CVE-2021-30953\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30953\n[ 27 ] CVE-2021-30749\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30749\n[ 28 ] CVE-2021-30849\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30849\n[ 29 ] CVE-2021-1826\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1826\n[ 30 ] CVE-2021-30836\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30836\n[ 31 ] CVE-2021-30954\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30954\n[ 32 ] CVE-2021-30984\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30984\n[ 33 ] CVE-2021-30851\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30851\n[ 34 ] CVE-2021-30758\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30758\n[ 35 ] CVE-2021-42762\n      https://nvd.nist.gov/vuln/detail/CVE-2021-42762\n[ 36 ] CVE-2021-1844\n      https://nvd.nist.gov/vuln/detail/CVE-2021-1844\n[ 37 ] CVE-2021-30689\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30689\n[ 38 ] CVE-2021-45482\n      https://nvd.nist.gov/vuln/detail/CVE-2021-45482\n[ 39 ] CVE-2021-30858\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30858\n[ 40 ] CVE-2021-21779\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21779\n[ 41 ] WSA-2021-0004\n       https://webkitgtk.org/security/WSA-2021-0004.html\n[ 42 ] CVE-2021-30846\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30846\n[ 43 ] CVE-2021-30744\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30744\n[ 44 ] CVE-2021-30809\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30809\n[ 45 ] CVE-2021-30884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30884\n[ 46 ] CVE-2021-30720\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30720\n[ 47 ] CVE-2021-30799\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30799\n[ 48 ] CVE-2021-30795\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30795\n[ 49 ] CVE-2021-1817\n      
https://nvd.nist.gov/vuln/detail/CVE-2021-1817\n[ 50 ] CVE-2021-21775\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21775\n[ 51 ] CVE-2021-30887\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30887\n[ 52 ] CVE-2021-21806\n      https://nvd.nist.gov/vuln/detail/CVE-2021-21806\n[ 53 ] CVE-2021-30818\n      https://nvd.nist.gov/vuln/detail/CVE-2021-30818\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202202-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-30887"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "db": "VULHUB",
        "id": "VHN-390620"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169182"
      },
      {
        "db": "PACKETSTORM",
        "id": "164670"
      },
      {
        "db": "PACKETSTORM",
        "id": "164672"
      },
      {
        "db": "PACKETSTORM",
        "id": "164683"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      }
    ],
    "trust": 2.34
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-390620",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390620"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-30887",
        "trust": 4.0
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/12/20/6",
        "trust": 2.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164683",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "167037",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164672",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "165483",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0065",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0382",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3578",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0003",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021122705",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021102802",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022051140",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021122009",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021102715",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164670",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164682",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-390620",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30887",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169182",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165794",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390620"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169182"
      },
      {
        "db": "PACKETSTORM",
        "id": "164670"
      },
      {
        "db": "PACKETSTORM",
        "id": "164672"
      },
      {
        "db": "PACKETSTORM",
        "id": "164683"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "id": "VAR-202108-2087",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390620"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:34:08.609000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT212874 Apple\u00a0 Security update",
        "trust": 0.8,
        "url": "https://www.debian.org/security/2021/dsa-5030"
      },
      {
        "title": "Apple tvOS Fixes for permissions and access control issues vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=168366"
      },
      {
        "title": "Debian Security Advisories: DSA-5031-1 wpewebkit -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b580ac3a836730245847eb64ed334309"
      },
      {
        "title": "Debian Security Advisories: DSA-5030-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=698860a38bf11636c31ed82fe261874b"
      },
      {
        "title": "Red Hat: CVE-2021-30887",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2021-30887"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2023-2088",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2088"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      },
      {
        "problemtype": "Lack of information (CWE-noinfo) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.6,
        "url": "http://www.openwall.com/lists/oss-security/2021/12/20/6"
      },
      {
        "trust": 2.4,
        "url": "https://support.apple.com/en-us/ht212867"
      },
      {
        "trust": 1.9,
        "url": "https://www.debian.org/security/2021/dsa-5031"
      },
      {
        "trust": 1.8,
        "url": "https://www.debian.org/security/2021/dsa-5030"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht212869"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht212874"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht212876"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30887"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/7eqvz3cemtinlbz7pbc7wrxvevcrhnsm/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/hqkwd4bxrdd2ygr5avu7h5j5piqieu6v/"
      },
      {
        "trust": 0.8,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/7eqvz3cemtinlbz7pbc7wrxvevcrhnsm/"
      },
      {
        "trust": 0.8,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/hqkwd4bxrdd2ygr5avu7h5j5piqieu6v/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-30887"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30890"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0003"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0065"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164672/apple-security-advisory-2021-10-26-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0382"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht212875"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkit2gtk-two-vulnerabilities-37129"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/ht212869"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167037/red-hat-security-advisory-2022-1777-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-multiple-vulnerabilities-36717"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021102715"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021102802"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3578"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022051140"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164683/apple-security-advisory-2021-10-26-7.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021122705"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021122009"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165483/ubuntu-security-notice-usn-5213-1.html"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30888"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30889"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.3,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30881"
      },
      {
        "trust": 0.3,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30886"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30952"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30984"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45482"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30953"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30936"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30897"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30954"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30934"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30951"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30917"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30915"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30919"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30907"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30910"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30903"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30905"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30909"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30906"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30894"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30883"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30895"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30896"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2023-2088.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22592"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22637"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1777"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22620"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/webkit2gtk"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30875"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30900"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30911"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30914"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212867."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30902"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212869."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30901"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30892"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30877"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30899"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30868"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30821"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30861"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30876"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30873"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30879"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30833"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30864"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30880"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212876."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30744"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1820"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0005.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30858"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30682"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30663"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1817"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42762"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30758"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30799"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21779"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202202-01"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1871"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30665"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30795"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1825"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30661"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30734"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30797"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21775"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30689"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0004.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30720"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-1788"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://webkitgtk.org/security/wsa-2021-0006.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21806"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390620"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169182"
      },
      {
        "db": "PACKETSTORM",
        "id": "164670"
      },
      {
        "db": "PACKETSTORM",
        "id": "164672"
      },
      {
        "db": "PACKETSTORM",
        "id": "164683"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-390620"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "169182"
      },
      {
        "db": "PACKETSTORM",
        "id": "164670"
      },
      {
        "db": "PACKETSTORM",
        "id": "164672"
      },
      {
        "db": "PACKETSTORM",
        "id": "164683"
      },
      {
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-08-24T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390620"
      },
      {
        "date": "2021-08-24T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "date": "2022-05-11T15:50:41",
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "date": "2021-12-28T20:12:00",
        "db": "PACKETSTORM",
        "id": "169182"
      },
      {
        "date": "2021-10-27T16:36:10",
        "db": "PACKETSTORM",
        "id": "164670"
      },
      {
        "date": "2021-10-27T16:36:46",
        "db": "PACKETSTORM",
        "id": "164672"
      },
      {
        "date": "2021-10-28T14:48:45",
        "db": "PACKETSTORM",
        "id": "164683"
      },
      {
        "date": "2022-02-01T17:03:05",
        "db": "PACKETSTORM",
        "id": "165794"
      },
      {
        "date": "2021-08-24T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      },
      {
        "date": "2024-07-18T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "date": "2021-08-24T19:15:16.817000",
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390620"
      },
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-30887"
      },
      {
        "date": "2022-05-12T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      },
      {
        "date": "2024-07-18T08:32:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      },
      {
        "date": "2023-11-07T03:33:42.300000",
        "db": "NVD",
        "id": "CVE-2021-30887"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Vulnerabilities in products from multiple vendors, including Apple\u0027s iPadOS",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021214"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "permissions and access control issues",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1978"
      }
    ],
    "trust": 0.6
  }
}

VAR-202004-1972

Vulnerability from variot - Updated: 2025-12-22 22:23

A race condition was addressed with additional validation. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. An application may be able to read restricted memory. Multiple Apple products are vulnerable to a race condition caused by a flawed validation process, which may allow restricted memory to be read. Apple iOS, tvOS, and iPadOS are all Apple products. Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. Apple iPadOS is an operating system for iPad tablets. A race condition vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Apple iCloud for Windows versions prior to 7.18 and 10.9.3; iTunes for Windows versions prior to 12.10.5; iOS versions prior to 13.4; iPadOS versions prior to 13.4; Safari versions prior to 13.1; and tvOS versions prior to 13.4. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error that could result in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-11070) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-6237) WebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if it were for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-6251) A type confusion issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8506) Multiple memory corruption issues were addressed with improved memory handling. 
Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8524) A memory corruption issue was addressed with improved state management. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8535) A memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8536) A memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8551) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8558) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8559) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8563) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8571) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8583) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8584) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8586) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8587) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8594) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8595) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8596) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8597) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may result in the disclosure of process memory. (CVE-2019-8607) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8608) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8609) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8610) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8611) Multiple memory corruption issues were addressed with improved memory handling. 
Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8615) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8619) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8622) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8623) A logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8625) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8644) A logic issue existed in the handling of synchronous page loads. This issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8649) A logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8658) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8666) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8669) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8671) Multiple memory corruption issues were addressed with improved memory handling. 
Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8672) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8673) A logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8674) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8676) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8677) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8678) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8679) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8680) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8681) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8683) Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8684) Multiple memory corruption issues were addressed with improved memory handling. 
Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8686)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8687)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8688)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8689)
A logic issue existed in the handling of document loads. This issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8690)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8707)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8710)
A logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8719)
This fixes a remote code execution in webkitgtk4. No further details are available in NIST. (CVE-2019-8720)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8726)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8733)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8735)
Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in watchOS 6.1. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8743)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8763)
A logic issue was addressed with improved state management. This issue is fixed in watchOS 6.1. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8764)
Multiple memory corruption issues were addressed with improved memory handling. This issue is fixed in watchOS 6.1. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8765)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8766)
"Clear History and Website Data" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)
An issue existed in the drawing of web page elements. The issue was addressed with improved logic. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)
This issue was addressed with improved iframe sandbox enforcement. Maliciously crafted web content may violate iframe sandboxing policy. (CVE-2019-8771)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8782)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8783)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8808)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8811)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8812)
A logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8813)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8814)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8815)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8816)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8819)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8820)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8821)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8822)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8823)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8835)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8844)
A use after free issue was addressed with improved memory management. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8846)
WebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. This issue has been fixed in 2.28.0 with improved memory handling. (CVE-2020-10018)
A use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. (CVE-2020-11793)
A denial of service issue was addressed with improved memory handling. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. (CVE-2020-3864)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3865)
A logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2020-3867)
Multiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3868)
A logic issue was addressed with improved restrictions. A file URL may be incorrectly processed. (CVE-2020-3894)
A memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3895)
A type confusion issue was addressed with improved memory handling. A remote attacker may be able to cause arbitrary code execution. (CVE-2020-3897)
A memory consumption issue was addressed with improved memory handling. A remote attacker may be able to cause arbitrary code execution. (CVE-2020-3899)
A memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3900)
A type confusion issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to a cross site scripting attack. (CVE-2020-3902). Description:

Service Telemetry Framework (STF) provides automated collection of measurements and data from remote clients, such as Red Hat OpenStack Platform or third-party nodes. Dockerfiles and scripts should be amended either to refer to this new image specifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. In addition to persistent storage, Red Hat OpenShift Container Storage provisions a multicloud data management service with an S3 compatible API.

These updated images include numerous security fixes, bug fixes, and enhancements. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume 1813506 - Dockerfile not compatible with docker and buildah 1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup 1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement 1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance 1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https) 1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node. 1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default 1842254 - [NooBaa] Compression stats do not add up when compression id disabled 1845976 - OCS 4.5 Independent mode: must-gather commands fails to collect ceph command outputs from external cluster 1849771 - [RFE] Account created by OBC should have same permissions as bucket owner 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot 1854501 - [Tracker-rhcs bug 1848494 ]pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume 1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume 1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS 1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount 1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. 
if the PV spec does not contain expansion params) 1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14) 1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage 1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards 1860034 - OCS 4.6 Deployment in ocs-ci : Toolbox pod in ContainerCreationError due to key admin-secret not found 1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining 1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script 1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update Advisory ID: RHSA-2020:5633-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2020:5633 Issue date: 2021-02-24 CVE Names: CVE-2018-10103 CVE-2018-10105 CVE-2018-14461 CVE-2018-14462 CVE-2018-14463 CVE-2018-14464 CVE-2018-14465 CVE-2018-14466 CVE-2018-14467 CVE-2018-14468 CVE-2018-14469 CVE-2018-14470 CVE-2018-14553 CVE-2018-14879 CVE-2018-14880 CVE-2018-14881 CVE-2018-14882 CVE-2018-16227 CVE-2018-16228 CVE-2018-16229 CVE-2018-16230 CVE-2018-16300 CVE-2018-16451 CVE-2018-16452 CVE-2018-20843 CVE-2019-3884 CVE-2019-5018 CVE-2019-6977 CVE-2019-6978 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9455 CVE-2019-9458 CVE-2019-11068 CVE-2019-12614 CVE-2019-13050 CVE-2019-13225 CVE-2019-13627 CVE-2019-14889 CVE-2019-15165 CVE-2019-15166 CVE-2019-15903 CVE-2019-15917 CVE-2019-15925 CVE-2019-16167 CVE-2019-16168 CVE-2019-16231 CVE-2019-16233 CVE-2019-16935 CVE-2019-17450 CVE-2019-17546 CVE-2019-18197 CVE-2019-18808 CVE-2019-18809 CVE-2019-19046 CVE-2019-19056 CVE-2019-19062 CVE-2019-19063 CVE-2019-19068 CVE-2019-19072 CVE-2019-19221 CVE-2019-19319 CVE-2019-19332 CVE-2019-19447 CVE-2019-19524 CVE-2019-19533 CVE-2019-19537 CVE-2019-19543 CVE-2019-19602 CVE-2019-19767 CVE-2019-19770 CVE-2019-19906 CVE-2019-19956 CVE-2019-20054 CVE-2019-20218 CVE-2019-20386 CVE-2019-20387 CVE-2019-20388 CVE-2019-20454 CVE-2019-20636 CVE-2019-20807 CVE-2019-20812 CVE-2019-20907 CVE-2019-20916 CVE-2020-0305 CVE-2020-0444 CVE-2020-1716 CVE-2020-1730 CVE-2020-1751 CVE-2020-1752 CVE-2020-1971 CVE-2020-2574 CVE-2020-2752 CVE-2020-2922 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 
CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3898 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-6405 CVE-2020-7595 CVE-2020-7774 CVE-2020-8177 CVE-2020-8492 CVE-2020-8563 CVE-2020-8566 CVE-2020-8619 CVE-2020-8622 CVE-2020-8623 CVE-2020-8624 CVE-2020-8647 CVE-2020-8648 CVE-2020-8649 CVE-2020-9327 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-10018 CVE-2020-10029 CVE-2020-10732 CVE-2020-10749 CVE-2020-10751 CVE-2020-10763 CVE-2020-10773 CVE-2020-10774 CVE-2020-10942 CVE-2020-11565 CVE-2020-11668 CVE-2020-11793 CVE-2020-12465 CVE-2020-12655 CVE-2020-12659 CVE-2020-12770 CVE-2020-12826 CVE-2020-13249 CVE-2020-13630 CVE-2020-13631 CVE-2020-13632 CVE-2020-14019 CVE-2020-14040 CVE-2020-14381 CVE-2020-14382 CVE-2020-14391 CVE-2020-14422 CVE-2020-15157 CVE-2020-15503 CVE-2020-15862 CVE-2020-15999 CVE-2020-16166 CVE-2020-24490 CVE-2020-24659 CVE-2020-25211 CVE-2020-25641 CVE-2020-25658 CVE-2020-25661 CVE-2020-25662 CVE-2020-25681 CVE-2020-25682 CVE-2020-25683 CVE-2020-25684 CVE-2020-25685 CVE-2020-25686 CVE-2020-25687 CVE-2020-25694 CVE-2020-25696 CVE-2020-26160 CVE-2020-27813 CVE-2020-27846 CVE-2020-28362 CVE-2020-29652 CVE-2021-2007 CVE-2021-3121 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6
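The digests listed above pin each release image by content rather than by a mutable tag. As a rough illustration of what such a digest is, the sketch below computes the "sha256:&lt;hex&gt;" form over a byte slice; note this is an assumption-laden simplification, since oc and the registry compute the digest over the image manifest bytes, and digestOf is a hypothetical helper:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// digestOf returns the "sha256:<hex>" content-digest form used to pin
// container images. Illustrative: real tooling hashes the image manifest.
func digestOf(manifest []byte) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256(manifest))
}

func main() {
	fmt.Println(digestOf([]byte("example manifest bytes")))
}
```

Pulling by digest (for example quay.io/openshift-release-dev/ocp-release@sha256:...) therefore guarantees the exact release image shown by `oc adm release info`, whereas a tag like :4.7.0 could in principle be re-pointed.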

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

  • crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

  • golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

  • containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

  • heketi: gluster-block volume password details available in logs (CVE-2020-10763)

  • golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

  • golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  1. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

  1. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state 1752220 - [OVN] Network Policy fails to work when project label gets overwritten 1756096 - Local storage operator should implement must-gather spec 1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs 1768255 - installer reports 100% complete but failing components 1770017 - Init containers restart when the exited container is removed from node. 1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating 1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset 1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale 1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating create commands 1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions 1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved" 1797766 - Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor 1801089 - [OVN] Installation failed and monitoring pod not created due to some network error. 
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image 1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration 1806000 - CRI-O failing with: error reserving ctr name 1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be 1810438 - Installation logs are not gathered from OCP nodes 1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist 1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation 1813012 - EtcdDiscoveryDomain no longer needed 1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints 1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use 1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist 1819457 - Package Server is in 'Cannot update' status despite properly working 1820141 - [RFE] deploy qemu-quest-agent on the nodes 1822744 - OCS Installation CI test flaking 1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario 1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool 1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file 1829723 - User workload monitoring alerts fire out of the box 1832968 - oc adm catalog mirror does not mirror the index image itself 1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN 1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters 1834995 - olmFull suite always fails once th suite is run on the same 
cluster 1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz 1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4 1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks 1838751 - [oVirt][Tracker] Re-enable skipped network tests 1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups 1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed 1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP 1841119 - Get rid of config patches and pass flags directly to kcm 1841175 - When an Install Plan gets deleted, OLM does not create a new one 1841381 - Issue with memoryMB validation 1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option 1844727 - Etcd container leaves grep and lsof zombie processes 1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs 1847074 - Filter bar layout issues at some screen widths on search page 1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural 1849543 - [4.5]kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5 1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service 1851203 - [GSS] [RFE] Need a simpler representation of capactiy breakdown in total usage and per project breakdown in OCS 4 dashboard 1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing 1851693 - The oc apply should return errors instead of hanging there when failing to create the CRD 1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service 1853115 - the restriction of --cloud option should be shown in help text. 
1853116 - --to option does not work with --credentials-requests flag. 1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view 1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash 1854567 - "Installed Operators" list showing "duplicated" entries during installation 1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present 1855351 - Inconsistent Installer reactions to Ctrl-C during user input process 1855408 - OVN cluster unstable after running minimal scale test 1856351 - Build page should show metrics for when the build ran, not the last 30 minutes 1856354 - New APIServices missing from OpenAPI definitions 1857446 - ARO/Azure: excessive pod memory allocation causes node lockup 1857877 - Operator upgrades can delete existing CSV before completion 1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed 1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created 1860136 - default ingress does not propagate annotations to route object on update 1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed" 1860518 - unable to stop a crio pod 1861383 - Route with haproxy.router.openshift.io/timeout: 365d kills the ingress controller 1862430 - LSO: PV creation lock should not be acquired in a loop 1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group. 
1862608 - Virtual media does not work on hosts using BIOS, only UEFI 1862918 - [v2v] User should only select SRIOV network when importin vm with SRIOV network 1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff 1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt 1866043 - Configurable table column headers can be illegible 1866087 - Examining agones helm chart resources results in "Oh no!" 1866261 - Need to indicate the intentional behavior for Ansible in the create api help info 1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement 1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity 1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there’s no indication on which labels offer tooltip/help 1866340 - [RHOCS Usability Study][Dashboard] It was not clear why “No persistent storage alerts” was prominently displayed 1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations 1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x 1866482 - Few errors are seen when oc adm must-gather is run 1866605 - No metadata.generation set for build and buildconfig objects 1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name 1866901 - Deployment strategy for BMO allows multiple pods to run at the same time 1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure. 
1867165 - Cannot assign static address to baremetal install bootstrap vm 1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig 1867400 - [OCs 4.5]UI should not allow creation of second storagecluster of different mode in a single OCS 1867477 - HPA monitoring cpu utilization fails for deployments which have init containers 1867518 - [oc] oc should not print so many goroutines when ANY command fails 1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster 1867965 - OpenShift Console Deployment Edit overwrites deployment yaml 1868004 - opm index add appears to produce image with wrong registry server binary 1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table" 1868104 - Baremetal actuator should not delete Machine objects 1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead 1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters 1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node 1868645 - After a disaster recovery pods a stuck in "NodeAffinity" state and not running 1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation 1868765 - [vsphere][ci] could not reserve an IP address: no available addresses 1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster 1868976 - Prometheus error opening query log file on EBS backed PVC 1869293 - The configmap name looks confusing in aide-ds pod logs 1869606 - crio's failing to delete a network namespace 1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes 1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] 1870373 - Ingress Operator reports available 
when DNS fails to provision 1870467 - D/DC Part of Helm / Operator Backed should not have HPA 1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json 1870800 - [4.6] Managed Column not appearing on Pods Details page 1871170 - e2e tests are needed to validate the functionality of the etcdctl container 1872001 - EtcdDiscoveryDomain no longer needed 1872095 - content are expanded to the whole line when only one column in table on Resource Details page 1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console 1872128 - Can't run container with hostPort on ipv6 cluster 1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective 1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity 1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them 1872821 - [DOC] Typo in Ansible Operator Tutorial 1872907 - Fail to create CR from generated Helm Base Operator 1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page) 1873007 - [downstream] failed to read config when running the operator-sdk in the home path 1873030 - Subscriptions without any candidate operators should cause resolution to fail 1873043 - Bump to latest available 1.19.x k8s 1873114 - Nodes goes into NotReady state (VMware) 1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem 1873305 - Failed to power on /inspect node when using Redfish protocol 1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information 1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: “?” button/icon 
in Developer Console ->Navigation 1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working 1873593 - MCO fails to cope with ContainerRuntimeConfig thas has a name > 63 characters 1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\\"/mount-point\\") set in config.json failed: permission denied\"" 1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver 1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider 1874240 - [vsphere] unable to deprovision - Runtime error list attached objects 1874248 - Include validation for vcenter host in the install-config 1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading form 4.4.16 to 4.5.6 1874583 - apiserver tries and fails to log an event when shutting down 1874584 - add retry for etcd errors in kube-apiserver 1874638 - Missing logging for nbctl daemon 1874736 - [downstream] no version info for the helm-operator 1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution 1874968 - Accessibility: The project selection drop down is a keyboard trap 1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users 1875516 - disabled scheduling is easy to miss in node page of OCP console 1875598 - machine status is Running for a master node which has been terminated from the console 1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer failes after 2-5 minutes. 
1876166 - need to be able to disable kube-apiserver connectivity checks 1876469 - Invalid doc link on yaml template schema description 1876701 - podCount specDescriptor change doesn't take effect on operand details page 1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt 1876935 - AWS volume snapshot is not deleted after the cluster is destroyed 1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted 1877105 - add redfish to enabled_bios_interfaces 1877116 - e2e aws calico tests fail with rpc error: code = ResourceExhausted 1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown 1877648 - [sriov]VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices' 1877681 - Manually created PV can not be used 1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53 1877740 - RHCOS unable to get ip address during first boot 1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5 1877919 - panic in multus-admission-controller 1877924 - Cannot set BIOS config using Redfish with Dell iDracs 1878022 - Met imagestreamimport error when import the whole image repository 1878086 - OCP 4.6+OCS 4.6(multiple SC) Internal Mode- UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated 1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status 1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM 1878766 - CPU consumption on nodes is higher than the CPU count of the node. 1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus. 
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - oc api-resources does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE]Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE]Improve boot source description for 'Container' and ‘URL’
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod- oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in oc explain
1880787 - No description for Provisioning CRD for oc explain
1880902 - need dnsPlocy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails at with error : the container name \"assisted-installer\" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication require functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claim the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui]VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [ BateMetal IPI ] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation]Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere]Suggested mimimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Mainitenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visusally impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2
1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2
1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI
1885315 - unit tests fail on slow disks
1885319 - Remove redundant use of group and kind of DataVolumeTemplate
1885343 - Console doesn't load in iOS Safari when using self-signed certificates
1885344 - 4.7 upgrade - dummy bug for 1880591
1885358 - add p&f configuration to protect openshift traffic
1885365 - MCO does not respect the install section of systemd files when enabling
1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating
1885398 - CSV with only Webhook conversion can't be installed
1885403 - Some OLM events hide the underlying errors
1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case
1885425 - opm index add cannot batch add multiple bundles that use skips
1885543 - node tuning operator builds and installs an unsigned RPM
1885644 - Panic output due to timeouts in openshift-apiserver
1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU < 30 || totalMemory < 72 GiB for initial deployment
1885702 - Cypress: Fix 'aria-hidden-focus' accesibility violations
1885706 - Cypress: Fix 'link-name' accesibility violation
1885761 - DNS fails to resolve in some pods
1885856 - Missing registry v1 protocol usage metric on telemetry
1885864 - Stalld service crashed under the worker node
1885930 - [release 4.7] Collect ServiceAccount statistics
1885940 - kuryr/demo image ping not working
1886007 - upgrade test with service type load balancer will never work
1886022 - Move range allocations to CRD's
1886028 - [BM][IPI] Failed to delete node after scale down
1886111 - UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas
1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd
1886154 - System roles are not present while trying to create new role binding through web console
1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5->4.6 causes broadcast storm
1886168 - Remove Terminal Option for Windows Nodes
1886200 - greenwave / CVP is failing on bundle validations, cannot stage push
1886229 - Multipath support for RHCOS sysroot
1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage
1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status
1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL
1886397 - Move object-enum to console-shared
1886423 - New Affinities don't contain ID until saving
1886435 - Azure UPI uses deprecated command 'group deployment'
1886449 - p&f: add configuration to protect oauth server traffic
1886452 - layout options doesn't gets selected style on click i.e grey background
1886462 - IO doesn't recognize namespaces - 2 resources with the same name in 2 namespaces -> only 1 gets collected
1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest
1886524 - Change default terminal command for Windows Pods
1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution
1886600 - panic: assignment to entry in nil map
1886620 - Application behind service load balancer with PDB is not disrupted
1886627 - Kube-apiserver pods restarting/reinitializing periodically
1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider
1886636 - Panic in machine-config-operator
1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer.
1886751 - Gather MachineConfigPools
1886766 - PVC dropdown has 'Persistent Volume' Label
1886834 - ovn-cert is mandatory in both master and node daemonsets
1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState
1886861 - ordered-values.yaml not honored if values.schema.json provided
1886871 - Neutron ports created for hostNetworking pods
1886890 - Overwrite jenkins-agent-base imagestream
1886900 - Cluster-version operator fills logs with "Manifest: ..." spew
1886922 - [sig-network] pods should successfully create sandboxes by getting pod
1886973 - Local storage operator doesn't include correctly populate LocalVolumeDiscoveryResult in console
1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO
1887010 - Imagepruner met error "Job has reached the specified backoff limit" which causes image registry degraded
1887026 - FC volume attach fails with “no fc disk found” error on OCP 4.6 PowerVM cluster
1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6
1887046 - Event for LSO need update to avoid confusion
1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image
1887375 - User should be able to specify volumeMode when creating pvc from web-console
1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console
1887392 - openshift-apiserver: delegated authn/z should have ttl > metrics/healthz/readyz/openapi interval
1887428 - oauth-apiserver service should be monitored by prometheus
1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting "degraded: False"
1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN Kubernetes
1887465 - Deleted project is still referenced
1887472 - unable to edit application group for KSVC via gestures (shift+Drag)
1887488 - OCP 4.6: Topology Manager OpenShift E2E test fails: gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface
1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster
1887525 - Failures to set master HardwareDetails cannot easily be debugged
1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable
1887585 - ovn-masters stuck in crashloop after scale test
1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade.
1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator
1887740 - cannot install descheduler operator after uninstalling it
1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events
1887750 - oc explain localvolumediscovery returns empty description
1887751 - oc explain localvolumediscoveryresult returns empty description
1887778 - Add ContainerRuntimeConfig gatherer
1887783 - PVC upload cannot continue after approve the certificate
1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard
1887799 - User workload monitoring prometheus-config-reloader OOM
1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky
1887863 - Installer panics on invalid flavor
1887864 - Clean up dependencies to avoid invalid scan flagging
1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison
1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig
1888015 - workaround kubelet graceful termination of static pods bug
1888028 - prevent extra cycle in aggregated apiservers
1888036 - Operator details shows old CRD versions
1888041 - non-terminating pods are going from running to pending
1888072 - Setting Supermicro node to PXE boot via Redfish doesn't take affect
1888073 - Operator controller continuously busy looping
1888118 - Memory requests not specified for image registry operator
1888150 - Install Operand Form on OperatorHub is displaying unformatted text
1888172 - PR 209 didn't update the sample archive, but machineset and pdbs are now namespaced
1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build
1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5
1888311 - p&f: make SAR traffic from oauth and openshift apiserver exempt
1888363 - namespaces crash in dev
1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created
1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected
1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC
1888494 - imagepruner pod is error when image registry storage is not configured
1888565 - [OSP] machine-config-daemon-firstboot.service failed with "error reading osImageURL from rpm-ostree"
1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error
1888601 - The poddisruptionbudgets is using the operator service account, instead of gather
1888657 - oc doesn't know its name
1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable
1888671 - Document the Cloud Provider's ignore-volume-az setting
1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image
1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s", cr.GetName()
1888827 - ovnkube-master may segfault when trying to add IPs to a nil address set
1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster
1888866 - AggregatedAPIDown permanently firing after removing APIService
1888870 - JS error when using autocomplete in YAML editor
1888874 - hover message are not shown for some properties
1888900 - align plugins versions
1888985 - Cypress: Fix 'Ensures buttons have discernible text' accesibility violation
1889213 - The error message of uploading failure is not clear enough
1889267 - Increase the time out for creating template and upload image in the terraform
1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)
1889374 - Kiali feature won't work on fresh 4.6 cluster
1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode
1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade
1889515 - Accessibility - The symbols e.g checkmark in the Node > overview page has no text description, label, or other accessible information
1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance
1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown
1889577 - Resources are not shown on project workloads page
1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment
1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages
1889692 - Selected Capacity is showing wrong size
1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15
1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off
1889710 - Prometheus metrics on disk take more space compared to OCP 4.5
1889721 - opm index add semver-skippatch mode does not respect prerelease versions
1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn't see the Disk tab
1889767 - [vsphere] Remove certificate from upi-installer image
1889779 - error when destroying a vSphere installation that failed early
1889787 - OCP is flooding the oVirt engine with auth errors
1889838 - race in Operator update after fix from bz1888073
1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1
1889863 - Router prints incorrect log message for namespace label selector
1889891 - Backport timecache LRU fix
1889912 - Drains can cause high CPU usage
1889921 - Reported Degraded=False Available=False pair does not make sense
1889928 - [e2e][automation] Add more tests for golden os
1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName
1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings
1890074 - MCO extension kernel-headers is invalid
1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest
1890130 - multitenant mode consistently fails CI
1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e
1890145 - The mismatched of font size for Status Ready and Health Check secondary text
1890180 - FieldDependency x-descriptor doesn't support non-sibling fields
1890182 - DaemonSet with existing owner garbage collected
1890228 - AWS: destroy stuck on route53 hosted zone not found
1890235 - e2e: update Protractor's checkErrors logging
1890250 - workers may fail to join the cluster during an update from 4.5
1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member
1890270 - External IP doesn't work if the IP address is not assigned to a node
1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability
1890456 - [vsphere] mapi_instance_create_failed doesn't work on vsphere
1890467 - unable to edit an application without a service
1890472 - [Kuryr] Bulk port creation exception not completely formatted
1890494 - Error assigning Egress IP on GCP
1890530 - cluster-policy-controller doesn't gracefully terminate
1890630 - [Kuryr] Available port count not correctly calculated for alerts
1890671 - [SA] verify-image-signature using service account does not work
1890677 - 'oc image info' claims 'does not exist' for application/vnd.oci.image.manifest.v1+json manifest
1890808 - New etcd alerts need to be added to the monitoring stack
1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn't sync the "overall" sha it syncs only the sub arch sha.
1890984 - Rename operator-webhook-config to sriov-operator-webhook-config
1890995 - wew-app should provide more insight into why image deployment failed
1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call
1891047 - Helm chart fails to install using developer console because of TLS certificate error
1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler
1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI
1891108 - p&f: Increase the concurrency share of workload-low priority level
1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)
1891189 - [LSO] max device limit is accepting negative values. PVC is not getting created and no error is shown
1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn't meet requirements of chart)
1891362 - Wrong metrics count for openshift_build_result_total
1891368 - fync should be fsync for etcdHighFsyncDurations alert's annotations.message
1891374 - fync should be fsync for etcdHighFsyncDurations critical alert's annotations.message
1891376 - Extra text in Cluster Utilization charts
1891419 - Wrong detail head on network policy detail page.
1891459 - Snapshot tests should report stderr of failed commands
1891498 - Other machine config pools do not show during update
1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage
1891551 - Clusterautoscaler doesn't scale up as expected
1891552 - Handle missing labels as empty.
1891555 - The windows oc.exe binary does not have version metadata
1891559 - kuryr-cni cannot start new thread
1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11
1891625 - [Release 4.7] Mutable LoadBalancer Scope
1891702 - installer get pending when additionalTrustBundle is added into install-config.yaml
1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails
1891740 - OperatorStatusChanged is noisy
1891758 - the authentication operator may spam DeploymentUpdated event endlessly
1891759 - Dockerfile builds cannot change /etc/pki/ca-trust
1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1
1891825 - Error message not very informative in case of mode mismatch
1891898 - The ClusterServiceVersion can define Webhooks that cannot be created.
1891951 - UI should show warning while creating pools with compression on
1891952 - [Release 4.7] Apps Domain Enhancement
1891993 - 4.5 to 4.6 upgrade doesn't remove deployments created by marketplace
1891995 - OperatorHub displaying old content
1891999 - Storage efficiency card showing wrong compression ratio
1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version GLIBC_2.28' not found (required by ./opm)
1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector.
1892198 - TypeError in 'Performance Profile' tab displayed for 'Performance Addon Operator'
1892288 - assisted install workflow creates excessive control-plane disruption
1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config
1892358 - [e2e][automation] update feature gate for kubevirt-gating job
1892376 - Deleted netnamespace could not be re-created
1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky
1892393 - TestListPackages is flaky
1892448 - MCDPivotError alert/metric missing
1892457 - NTO-shipped stalld needs to use FIFO for boosting.
1892467 - linuxptp-daemon crash
1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env
1892653 - User is unable to create KafkaSource with v1beta
1892724 - VFS added to the list of devices of the nodeptpdevice CRD
1892799 - Mounting additionalTrustBundle in the operator
1893117 - Maintenance mode on vSphere blocks installation.
1893351 - TLS secrets are not able to edit on console.
1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots
1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky "worker" assumption when guessing about ingress availability
1893546 - Deploy using virtual media fails on node cleaning step
1893601 - overview filesystem utilization of OCP is showing the wrong values
1893645 - oc describe route SIGSEGV
1893648 - Ironic image building process is not compatible with UEFI secure boot
1893724 - OperatorHub generates incorrect RBAC
1893739 - Force deletion doesn't work for snapshots if snapshotclass is already deleted
1893776 - No useful metrics for image pull time available, making debugging issues there impossible
1893798 - Lots of error messages starting with "get namespace to enqueue Alertmanager instances failed" in the logs of prometheus-operator
1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD
1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS
1893926 - Some "Dynamic PV (block volmode)" pattern storage e2e tests are wrongly skipped
1893944 - Wrong product name for Multicloud Object Gateway
1893953 - (release-4.7) Gather default StatefulSet configs
1893956 - Installation always fails at "failed to initialize the cluster: Cluster operator image-registry is still updating"
1893963 - [Testday] Workloads-> Virtualization is not loading for Firefox browser
1893972 - Should skip e2e test cases as early as possible
1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without 'https://'
1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective
1894025 - OCP 4.5 to 4.6 upgrade for "aws-ebs-csi-driver-operator" fails when "defaultNodeSelector" is set
1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import wizard: The target storage class name is not displayed if default storage class is used.
1894065 - tag new packages to enable TLS support
1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0
1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries
1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM
1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted
1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)
1894216 - Improve OpenShift Web Console availability
1894275 - Fix CRO owners file to reflect node owner
1894278 - "database is locked" error when adding bundle to index image
1894330 - upgrade channels needs to be updated for 4.7
1894342 - oauth-apiserver logs many "[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient"
1894374 - Dont prevent the user from uploading a file with incorrect extension
1894432 - [oVirt] sometimes installer timeout on tmp_import_vm
1894477 - bash syntax error in nodeip-configuration.service
1894503 - add automated test for Polarion CNV-5045
1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform
1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets
1894645 - Cinder volume provisioning crashes on nil cloud provider
1894677 - image-pruner job is panicking: klog stack
1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0
1894860 - 'backend' CI job passing despite failing tests
1894910 - Update the node to use the real-time kernel fails
1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package
1895065 - Schema / Samples / Snippets Tabs are all selected at the same time
1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI
1895141 - panic in service-ca injector
1895147 - Remove memory limits on openshift-dns
1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation
1895268 - The bundleAPIs should NOT be empty
1895309 - [OCP v47] The RHEL node scaleup fails due to "No package matching 'cri-o-1.19.*' found available" on OCP 4.7 cluster
1895329 - The infra index filled with warnings "WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release"
1895360 - Machine Config Daemon removes a file although its defined in the dropin
1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1
1895372 - Web console going blank after selecting any operator to install from OperatorHub
1895385 - Revert KUBELET_LOG_LEVEL back to level 3
1895423 - unable to edit an application with a custom builder image
1895430 - unable to edit custom template application
1895509 - Backup taken on one master cannot be restored on other masters
1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image
1895838 - oc explain description contains '/'
1895908 - "virtio" option is not available when modifying a CD-ROM to disk type
1895909 - e2e-metal-ipi-ovn-dualstack is failing
1895919 - NTO fails to load kernel modules
1895959 - configuring webhook token authentication should prevent cluster upgrades
1895979 - Unable to get coreos-installer with --copy-network to work
1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV
1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)
1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed
1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces 1897138 - oVirt provider uses deprecated cluster-api project 1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly 1897252 - Firing alerts are not showing up in console UI after cluster is up for some time 1897354 - Operator installation showing success, but Provided APIs are missing 1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused" 1897412 - [sriov]disableDrain did not be updated in CRD of manifest 1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page 1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost' 1897520 - After restarting nodes the image-registry co is in degraded true state. 1897584 - Add casc plugins 1897603 - Cinder volume attachment detection failure in Kubelet 1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized" 1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests 1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition 1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot Create OCS Cluster Service 1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing 1897897 - ptp lose sync openshift 4.6 1898036 - no network after reboot (IPI) 1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically 1898097 - mDNS floods the baremetal network 1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem 1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied 
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster 1898174 - [OVN] EgressIP does not guard against node IP assignment 1898194 - GCP: can't install on custom machine types 1898238 - Installer validations allow same floating IP for API and Ingress 1898268 - [OVN]: 'make check' broken on 4.6 1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default 1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover 1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display. 1898407 - [Deployment timing regression] Deployment takes longer with 4.7 1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service 1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine 1898500 - Failure to upgrade operator when a Service is included in a Bundle 1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic 1898532 - Display names defined in specDescriptors not respected 1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted 1898613 - Whereabouts should exclude IPv6 ranges 1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase 1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator 1898839 - Wrong YAML in operator metadata 1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job 1898873 - Remove TechPreview Badge from Monitoring 1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way 1899111 - 
[RFE] Update jenkins-maven-agent to maven36 1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist 1899175 - bump the RHCOS boot images for 4.7 1899198 - Use new packages for ipa ramdisks 1899200 - In Installed Operators page I cannot search for an Operator by its name 1899220 - Support AWS IMDSv2 1899350 - configure-ovs.sh doesn't configure bonding options 1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found" 1899459 - Failed to start monitoring pods once the operator removed from override list of CVO 1899515 - Passthrough credentials are not immediately re-distributed on update 1899575 - update discovery burst to reflect lots of CRDs on openshift clusters 1899582 - update discovery burst to reflect lots of CRDs on openshift clusters 1899588 - Operator objects are re-created after all other associated resources have been deleted 1899600 - Increased etcd fsync latency as of OCP 4.6 1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup 1899627 - Project dashboard Active status using small icon 1899725 - Pods table does not wrap well with quick start sidebar open 1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD) 1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality 1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0" 1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config configmap 1899853 - additionalSecurityGroupIDs not working for master nodes 1899922 - NP changes sometimes influence new pods. 
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet 1900008 - Fix internationalized sentence fragments in ImageSearch.tsx 1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx 1900020 - Remove &apos; from internationalized keys 1900022 - Search Page - Top labels field is not applied to selected Pipeline resources 1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently 1900126 - Creating a VM results in suggestion to create a default storage class when one already exists 1900138 - [OCP on RHV] Remove insecure mode from the installer 1900196 - stalld is not restarted after crash 1900239 - Skip "subPath should be able to unmount" NFS test 1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists 1900377 - [e2e][automation] create new css selector for active users 1900496 - (release-4.7) Collect spec config for clusteroperator resources 1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks 1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue 1900759 - include qemu-guest-agent by default 1900790 - Track all resource counts via telemetry 1900835 - Multus errors when cachefile is not found 1900935 - 'oc adm release mirror' panic panic: runtime error 1900989 - accessing the route cannot wake up the idled resources 1901040 - When scaling down the status of the node is stuck on deleting 1901057 - authentication operator health check failed when installing a cluster behind proxy 1901107 - pod donut shows incorrect information 1901111 - Installer dependencies are broken 1901200 - linuxptp-daemon crash when enable debug log level 1901301 - CBO should handle platform=BM without provisioning CR 1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly 1901363 - High Podready Latency due to timed out waiting for 
annotations 1901373 - redundant bracket on snapshot restore button 1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true" 1901395 - "Edit virtual machine template" action link should be removed 1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting 1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP 1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema 1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance" 1901604 - CNO blocks editing Kuryr options 1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled 1901909 - The device plugin pods / cni pod are restarted every 5 minutes 1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service 1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error 1902059 - Wire a real signer for service account issuer 1902091 - 'cluster-image-registry-operator' pod leaves connections open when fails connecting S3 storage 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod 1902253 - MHC status doesn't set RemediationsAllowed = 0 1902299 - Failed to mirror operator catalog - error: destination registry required 1902545 - Cinder csi driver node pod should add nodeSelector for Linux 1902546 - Cinder csi driver node 
pod doesn't run on master node 1902547 - Cinder csi driver controller pod doesn't run on master node 1902552 - Cinder csi driver does not use the downstream images 1902595 - Project workloads list view doesn't show alert icon and hover message 1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent 1902601 - Cinder csi driver pods run as BestEffort qosClass 1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group 1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails 1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked 1902824 - failed to generate semver informed package manifest: unable to determine default channel 1902894 - hybrid-overlay-node crashing trying to get node object during initialization 1902969 - Cannot load vmi detail page 1902981 - It should default to current namespace when create vm from template 1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI 1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry 1903034 - OLM continuously printing debug logs 1903062 - [Cinder csi driver] Deployment mounted volume have no write access 1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready 1903107 - Enable vsphere-problem-detector e2e tests 1903164 - OpenShift YAML editor jumps to top every few seconds 1903165 - Improve Canary Status Condition handling for e2e tests 1903172 - Column Management: Fix sticky footer on scroll 1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled 1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format: 1903192 - Role name missing on create role binding form 
1903196 - Popover positioning is misaligned for Overview Dashboard status items 1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends. 1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components 1903248 - Backport Upstream Static Pod UID patch 1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests] 1903290 - Kubelet repeatedly log the same log line from exited containers 1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption. 1903382 - Panic when task-graph is canceled with a TaskNode with no tasks 1903400 - Migrate a VM which is not running goes to pending state 1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page 1903414 - NodePort is not working when configuring an egress IP address 1903424 - mapi_machine_phase_transition_seconds_sum doesn't work 1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum" 1903639 - Hostsubnet gatherer produces wrong output 1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service 1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started 1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image 1903717 - Handle different Pod selectors for metal3 Deployment 1903733 - Scale up followed by scale down can delete all running workers 1903917 - Failed to load "Developer Catalog" page 1903999 - Httplog response code is always zero 1904026 - The quota controllers should resync on new resources and make progress 1904064 - Automated cleaning is disabled by default 1904124 - DHCP to static lease script doesn't work correctly if starting with infinite 
leases 1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap 1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails 1904133 - KubeletConfig flooded with failure conditions 1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart 1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi ! 1904244 - MissingKey errors for two plugins using i18next.t 1904262 - clusterresourceoverride-operator has version: 1.0.0 every build 1904296 - VPA-operator has version: 1.0.0 every build 1904297 - The index image generated by "opm index prune" leaves unrelated images 1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards 1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade 1904497 - vsphere-problem-detector: Run on vSphere cloud only 1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set 1904502 - vsphere-problem-detector: allow longer timeouts for some operations 1904503 - vsphere-problem-detector: emit alerts 1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody) 1904578 - metric scraping for vsphere problem detector is not configured 1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade 1904663 - IPI pointer customization MachineConfig always generated 1904679 - [Feature:ImageInfo] Image info should display information about images 1904683 - [sig-builds][Feature:Builds] s2i build with a root user image tests use docker.io image 1904684 - [sig-cli] oc debug ensure it works with image streams 1904713 - Helm charts with kubeVersion restriction are filtered incorrectly 1904776 - Snapshot modal alert is not pluralized 1904824 - Set vSphere hostname from guestinfo before NM starts 1904941 - Insights 
status is always showing a loading icon 1904973 - KeyError: 'nodeName' on NP deletion 1904985 - Prometheus and thanos sidecar targets are down 1904993 - Many ampersand special characters are found in strings 1905066 - QE - Monitoring test cases - smoke test suite automation 1905074 - QE - Gherkin linter to maintain standards 1905100 - Too many haproxy processes in default-router pod causing high load average 1905104 - Snapshot modal disk items missing keys 1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm 1905119 - Race in AWS EBS determining whether custom CA bundle is used 1905128 - [e2e][automation] e2e tests succeed without actually execute 1905133 - operator conditions special-resource-operator 1905141 - vsphere-problem-detector: report metrics through telemetry 1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures 1905194 - Detecting broken connections to the Kube API takes up to 15 minutes 1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests 1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP 1905253 - Inaccurate text at bottom of Events page 1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905299 - OLM fails to update operator 1905307 - Provisioning CR is missing from must-gather 1905319 - cluster-samples-operator containers are not requesting required memory resource 1905320 - csi-snapshot-webhook is not requesting required memory resource 1905323 - dns-operator is not requesting required memory resource 1905324 - ingress-operator is not requesting required memory resource 1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory 1905328 - Changing the bound token service account issuer invalidates previously issued bound tokens 1905329 - 
openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory 1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory 1905338 - QE - Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails 1905347 - QE - Design Gherkin Scenarios 1905348 - QE - Design Gherkin Scenarios 1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod 1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted 1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input 1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation 1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1 1905404 - The example of "Remove the entrypoint on the mysql:latest image" for 'oc image append' does not work 1905416 - Hyperlink not working from Operator Description 1905430 - usbguard extension fails to install because of missing correct protobuf dependency version 1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads 1905502 - Test flake - unable to get https transport for ephemeral-registry 1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6. 
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs 1905610 - Fix typo in export script 1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster 1905640 - Subscription manual approval test is flaky 1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry 1905696 - ClusterMoreUpdatesModal component did not get internationalized 1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes 1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project 1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster 1905792 - [OVN]Cannot create egressfirewall with dnsName 1905889 - Should create SA for each namespace that the operator scoped 1905920 - Quickstart exit and restart 1905941 - Page goes to error after create catalogsource 1905977 - QE gherkin design scenario-pipeline metrics ODC-3711 1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters 1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected 1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it 1906118 - OCS feature detection constantly polls storageclusters and storageclasses 1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource 1906121 - [oc] After new-project creation, the kubeconfig file does not set the project 1906134 - OLM should not create OperatorConditions for copied CSVs 1906143 - CBO supports log levels 1906186 - i18n: Translators are not able to translate 'this' without context for alert manager config 1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots 1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion 
to support volume resize. 1906276 - 'oc image append' can't work with multi-arch image with --filter-by-os='.*' 1906318 - use proper term for Authorized SSH Keys 1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional 1906356 - Unify Clone PVC boot source flow with URL/Container boot source 1906397 - IPA has incorrect kernel command line arguments 1906441 - HorizontalNav and NavBar have invalid keys 1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log 1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project 1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them 1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures 1906511 - Root reprovisioning tests flaking often in CI 1906517 - Validation is not robust enough and may prevent to generate install-config. 1906518 - Update snapshot API CRDs to v1 1906519 - Update LSO CRDs to use v1 1906570 - Number of disruptions caused by reboots on a cluster cannot be measured 1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope 1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs 1906655 - [SDN]Cannot collect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs 1906679 - quick start panel styles are not loaded 1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber 1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form 1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created 1906689 - user can pin to nav configmaps and secrets multiple times 1906691 - Add doc which describes disabling helm chart repository 1906713 - Quick 
starts not accessible for a developer user 1906718 - helm chart "provided by Redhat" is misspelled 1906732 - Machine API proxy support should be tested 1906745 - Update Helm endpoints to use Helm 3.4.x 1906760 - performance issues with topology constantly re-rendering 1906766 - localized 'Autoscaled' & 'Autoscaling' pod texts overlap with the pod ring 1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section 1906769 - topology fails to load with non-kubeadmin user 1906770 - shortcuts on mobiles view occupies a lot of space 1906798 - Dev catalog customization doesn't update console-config ConfigMap 1906806 - Allow installing extra packages in ironic container images 1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer 1906835 - Topology view shows add page before then showing full project workloads 1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version 1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy 1906860 - Bump kube dependencies to v1.20 for Net Edge components 1906864 - Quick Starts Tour: Need to adjust vertical spacing 1906866 - Translations of Sample-Utils 1906871 - White screen when sort by name in monitoring alerts page 1906872 - Pipeline Tech Preview Badge Alignment 1906875 - Provide an option to force backup even when API is not available. 
1906877 - 'Placeholder' value in search filter do not match column heading in Vulnerabilities 1906879 - Add missing i18n keys 1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install 1906896 - No Alerts causes odd empty Table (Need no content message) 1906898 - Missing User RoleBindings in the Project Access Web UI 1906899 - Quick Start - Highlight Bounding Box Issue 1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1 1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers 1906935 - Delete resources when Provisioning CR is deleted 1906968 - Must-gather should support collecting kubernetes-nmstate resources 1906986 - Ensure failed pod adds are retried even if the pod object doesn't change 1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt 1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change 1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible. 1907269 - Tooltips data are different when checking stack or not checking stack for the same time 1907280 - Install tour of OCS not available. 
1907282 - Topology page breaks with white screen 1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance 1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent 1907293 - Increase timeouts in e2e tests 1907295 - Gherkin script for improve management for helm 1907299 - Advanced Subscription Badge for KMS and Arbiter not present 1907303 - Align VM template list items by baseline 1907304 - Use PF styles for selected template card in VM Wizard 1907305 - Drop 'ISO' from CDROM boot source message 1907307 - Support and provider labels should be passed on between templates and sources 1907310 - Pin action should be renamed to favorite 1907312 - VM Template source popover is missing info about added date 1907313 - ClusterOperator objects cannot be overriden with cvo-overrides 1907328 - iproute-tc package is missing in ovn-kube image 1907329 - CLUSTER_PROFILE env. variable is not used by the CVO 1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached" 1907373 - Rebase to kube 1.20.0 1907375 - Bump to latest available 1.20.x k8s - workloads team 1907378 - Gather netnamespaces networking info 1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity 1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one 1907390 - prometheus-adapter: panic after k8s 1.20 bump 1907399 - build log icon link on topology nodes cause app to reload 1907407 - Buildah version not accessible 1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer" 1907453 - Dev Perspective -> running vm details -> resources -> no data 1907454 - Install PodConnectivityCheck CRD with CNO 1907459 - "The Boot source is also maintained by Red Hat." 
is always shown for all boot sources 1907475 - Unable to estimate the error rate of ingress across the connected fleet 1907480 - 'Active alerts' section throwing forbidden error for users. 1907518 - Kamelets/Eventsource should be shown to user if they have create access 1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US 1907610 - Update kubernetes deps to 1.20 1907612 - Update kubernetes deps to 1.20 1907621 - openshift/installer: bump cluster-api-provider-kubevirt version 1907628 - Installer does not set primary subnet consistently 1907632 - Operator Registry should update its kubernetes dependencies to 1.20 1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters 1907644 - fix up handling of non-critical annotations on daemonsets/deployments 1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?) 1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication 1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail 1907767 - [e2e][automation]update test suite for kubevirt plugin 1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot 1907792 - The 'overrides' of the OperatorCondition cannot block the operator upgrade 1907793 - Surface support info in VM template details 1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage 1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set 1907863 - Quickstarts status not updating when starting the tour 1907872 - dual stack with an ipv6 network fails on bootstrap phase 1907874 - QE - Design Gherkin Scenarios for epic ODC-5057 1907875 - No response when try to expand pvc with an invalid size 1907876 - Refactoring record package to make gatherer configurable 1907877 - QE - Automation- pipelines builder scripts 1907883 
- Fix Pipleine creation without namespace issue 1907888 - Fix pipeline list page loader 1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form 1907892 - Unable to edit application deployed using "From Devfile" option 1907893 - navSortUtils.spec.ts unit test failure 1907896 - When a workload is added, Topology does not place the new items well 1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template 1907924 - Enable madvdontneed in OpenShift Images 1907929 - Enable madvdontneed in OpenShift System Components Part 2 1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot 1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context 1907948 - OCM-O bump to k8s 1.20 1907952 - bump to k8s 1.20 1907972 - Update OCM link to open Insights tab 1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI 1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916 1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni 1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk 1908035 - dynamic-demo-plugin build does not generate dist directory 1908135 - quick search modal is not centered over topology 1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled 1908159 - [AWS C2S] MCO fails to sync cloud config 1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384) 1908180 - Add source for template is stucking in preparing pvc 1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens 1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN 1908277 - QE - Automation- pipelines actions scripts 1908280 - Documentation 
describingignore-volume-azis incorrect 1908296 - Fix pipeline builder form yaml switcher validation issue 1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI 1908323 - Create button missing for PLR in the search page 1908342 - The new pv_collector_total_pv_count is not reported via telemetry 1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name 1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots 1908349 - Volume snapshot tests are failing after 1.20 rebase 1908353 - QE - Automation- pipelines runs scripts 1908361 - bump to k8s 1.20 1908367 - QE - Automation- pipelines triggers scripts 1908370 - QE - Automation- pipelines secrets scripts 1908375 - QE - Automation- pipelines workspaces scripts 1908381 - Go Dependency Fixes for Devfile Lib 1908389 - Loadbalancer Sync failing on Azure 1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived 1908407 - Backport Upstream 95269 to fix potential crash in kubelet 1908410 - Exclude Yarn from VSCode search 1908425 - Create Role Binding form subject type and name are undefined when All Project is selected 1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods 1908434 - Remove &apos from metal3-plugin internationalized strings 1908437 - Operator backed with no icon has no badge associated with the CSV tag 1908459 - bump to k8s 1.20 1908461 - Add bugzilla component to OWNERS file 1908462 - RHCOS 4.6 ostree removed dhclient 1908466 - CAPO AZ Screening/Validating 1908467 - Zoom in and zoom out in topology package should be sentence case 1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size 1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster 1908471 - OLM should bump k8s dependencies to 1.20 1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all 
manifests 1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1908545 - VM clone dialog does not open 1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard 1908562 - Pod readiness is not being observed in real world cases 1908565 - [4.6] Cannot filter the platform/arch of the index image 1908573 - Align the style of flavor 1908583 - bootstrap does not run on additional networks if configured for master in install-config 1908596 - Race condition on operator installation 1908598 - Persistent Dashboard shows events for all provisioners 1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state 1908648 - Skip TestKernelType test on OKD, adjust TestExtensions 1908650 - The title of customize wizard is inconsistent 1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator 1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s] 1908687 - Option to save user settings separate when using local bridge (affects console developers only) 1908697 - Showkubectl diff command in the oc diff help page 1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom 1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds 1908717 - "missing unit character in duration" error in some network dashboards 1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload 1908747 - stale S3 CredentialsRequest in CCO manifest 1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase 1908830 - RHCOS 4.6 - Missing Initiatorname 1908868 - Update empty state message for EventSources and Channels tab 1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite 
tolerations from tainted nodes 1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference 1908888 - Dualstack does not work with multiple gateways 1908889 - Bump CNO to k8s 1.20 1908891 - TestDNSForwarding DNS operator e2e test is failing frequently 1908914 - CNO: upgrade nodes before masters 1908918 - Pipeline builder yaml view sidebar is not responsive 1908960 - QE - Design Gherkin Scenarios 1908971 - Gherkin Script for pipeline debt 4.7 1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated 1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console 1908998 - [cinder-csi-driver] doesn't detect the credentials change 1909004 - "No datapoints found" for RHEL node's filesystem graph 1909005 - i18n: workloads list view heading is not translated 1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects 1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type 1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware 1909067 - Web terminal should keep latest output when connection closes 1909070 - PLR and TR Logs component is not streaming as fast as tkn 1909092 - Error Message should not confuse user on Channel form 1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page 1909108 - Machine API components should use 1.20 dependencies 1909116 - Catalog Sort Items dropdown is not aligned on Firefox 1909198 - Move Sink action option is not working 1909207 - Accessibility Issue on monitoring page 1909236 - Remove pinned icon overlap on resource name 1909249 - Intermittent packet drop from pod to pod 1909276 - Accessibility Issue on create project modal 1909289 - oc debug of an init container no longer 
works 1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2 1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle 1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it 1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O 1909464 - Build operator-registry with golang-1.15 1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found 1909521 - Add kubevirt cluster type for e2e-test workflow 1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created 1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node 1909610 - Fix available capacity when no storage class selected 1909678 - scale up / down buttons available on pod details side panel 1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART 1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined 1909739 - Arbiter request data changes 1909744 - cluster-api-provider-openstack: Bump gophercloud 1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline 1909791 - Update standalone kube-proxy config for EndpointSlice 1909792 - Empty states for some details page subcomponents are not i18ned 1909815 - Perspective switcher is only half-i18ned 1909821 - OCS 4.7 LSO installation blocked because of Error "Invalid value: "integer": spec.flexibleScaling in body 1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn't installed in CI 1909864 - promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing 1909911 - [OVN]EgressFirewall caused a segfault 1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument 1909958 - Support Quick Start Highlights Properly 1909978 - 
ignore-volume-az = yes not working on standard storageClass 1909981 - Improve statement in template select step 1909992 - Fail to pull the bundle image when using the private index image 1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev 1910036 - QE - Design Gherkin Scenarios ODC-4504 1910049 - UPI: ansible-galaxy is not supported 1910127 - [UPI on oVirt]: Improve UPI Documentation 1910140 - fix the api dashboard with changes in upstream kube 1.20 1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment's containers with the OPERATOR_CONDITION_NAME Environment Variable 1910165 - DHCP to static lease script doesn't handle multiple addresses 1910305 - [Descheduler] - The minKubeVersion should be 1.20.0 1910409 - Notification drawer is not localized for i18n 1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials 1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation 1910501 - Installed Operators->Operand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page 1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work 1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready 1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability 1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded 1910739 - Redfish-virtualmedia (idrac) deploy fails on "The Virtual Media image server is already connected" 1910753 - Support Directory Path to Devfile 1910805 - Missing translation for Pipeline status and breadcrumb text 1910829 - Cannot delete a PVC if the dv's phase is WaitForFirstConsumer 1910840 - Show Nonexistent command info in theoc rollback -hhelp page 1910859 - breadcrumbs doesn't use last namespace 1910866 - Unify templates string 1910870 - Unify template 
dropdown action 1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6 1911129 - Monitoring charts renders nothing when switching from a Deployment to "All workloads" 1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard 1911212 - [MSTR-998] API Performance Dashboard "Period" drop-down has a choice "$__auto_interval_period" which can bring "1:154: parse error: missing unit character in duration" 1911213 - Wrong and misleading warning for VMs that were created manually (not from template) 1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created 1911269 - waiting for the build message present when build exists 1911280 - Builder images are not detected for Dotnet, Httpd, NGINX 1911307 - Pod Scale-up requires extra privileges in OpenShift web-console 1911381 - "Select Persistent Volume Claim project" shows in customize wizard when select a source available template 1911382 - "source volumeMode (Block) and target volumeMode (Filesystem) do not match" shows in VM Error 1911387 - Hit error - "Cannot read property 'value' of undefined" while creating VM from template 1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation 1911418 - [v2v] The target storage class name is not displayed if default storage class is used 1911434 - git ops empty state page displays icon with watermark 1911443 - SSH Cretifiaction field should be validated 1911465 - IOPS display wrong unit 1911474 - Devfile Application Group Does Not Delete Cleanly (errors) 1911487 - Pruning Deployments should use ReplicaSets instead of ReplicationController 1911574 - Expose volume mode on Upload Data form 1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined 1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel 1911656 - using 'operator-sdk run bundle' to install operator successfully, but the command 
output said 'Failed to run bundle'' 1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state 1911782 - Descheduler should not evict pod used local storage by the PVC 1911796 - uploading flow being displayed before submitting the form 1912066 - The ansible type operator's manager container is not stable when managing the CR 1912077 - helm operator's default rbac forbidden 1912115 - [automation] Analyze job keep failing because of 'JavaScript heap out of memory' 1912237 - Rebase CSI sidecars for 4.7 1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page 1912409 - Fix flow schema deployment 1912434 - Update guided tour modal title 1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken 1912523 - Standalone pod status not updating in topology graph 1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion 1912558 - TaskRun list and detail screen doesn't show Pending status 1912563 - p&f: carry 97206: clean up executing request on panic 1912565 - OLM macOS local build broken by moby/term dependency 1912567 - [OCP on RHV] Node becomes to 'NotReady' status when shutdown vm from RHV UI only on the second deletion 1912577 - 4.1/4.2->4.3->...-> 4.7 upgrade is stuck during 4.6->4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff 1912590 - publicImageRepository not being populated 1912640 - Go operator's controller pods is forbidden 1912701 - Handle dual-stack configuration for NIC IP 1912703 - multiple queries can't be plotted in the same graph under some conditons 1912730 - Operator backed: In-context should support visual connector if SBO is not installed 1912828 - Align High Performance VMs with High Performance in RHV-UI 1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates 1912852 - VM from wizard - available VM templates - "storage" field is "0 B" 
1912888 - recycler template should be moved to KCM operator
1912907 - Helm chart repository index can contain unresolvable relative URL's
1912916 - Set external traffic policy to cluster for IBM platform
1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller
1912938 - Update confirmation modal for quick starts
1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment
1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment
1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver
1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver
1912977 - rebase upstream static-provisioner
1913006 - Remove etcd v2 specific alerts with etcd_http* metrics
1913011 - [OVN] Pod's external traffic not use egressrouter macvlan ip as a source ip
1913037 - update static-provisioner base image
1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state
1913085 - Regression OLM uses scoped client for CRD installation
1913096 - backport: cadvisor machine metrics are missing in k8s 1.19
1913132 - The installation of Openshift Virtualization reports success early before it's succeeded eventually
1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root
1913196 - Guided Tour doesn't handle resizing of browser
1913209 - Support modal should be shown for community supported templates
1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort
1913249 - update info alert this template is not editable
1913285 - VM list empty state should link to virtualization quick starts
1913289 - Rebase AWS EBS CSI driver for 4.7
1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled
1913297 - Remove restriction of taints for arbiter node
1913306 - unnecessary scroll bar is present on quick starts panel
1913325 - 1.20 rebase for openshift-apiserver
1913331 - Import from git: Fails to detect Java builder
1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used
1913343 - (release-4.7) Added changelog file for insights-operator
1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator
1913371 - Missing i18n key "Administrator" in namespace "console-app" and language "en."
1913386 - users can see metrics of namespaces for which they don't have rights when monitoring own services with prometheus user workloads
1913420 - Time duration setting of resources is not being displayed
1913536 - 4.6.9 -> 4.7 upgrade hangs. RHEL 7.9 worker stuck on "error enabling unit: Failed to execute operation: File exists\n"
1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase
1913560 - Normal user cannot load template on the new wizard
1913563 - "Virtual Machine" is not on the same line in create button when logged with normal user
1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table
1913568 - Normal user cannot create template
1913582 - [Migration] SDN to OVN migration gets stuck on MCO for rhel worker
1913585 - Topology descriptive text fixes
1913608 - Table data contains data value None after change time range in graph and change back
1913651 - Improved Red Hat image and crashlooping OpenShift pod collection
1913660 - Change location and text of Pipeline edit flow alert
1913685 - OS field not disabled when creating a VM from a template
1913716 - Include additional use of existing libraries
1913725 - Refactor Insights Operator Plugin states
1913736 - Regression: fails to deploy computes when using root volumes
1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes
1913751 - add third-party network plugin test suite to openshift-tests
1913783 - QE-To fix the merging pr issue, commenting the afterEach() block
1913807 - Template support badge should not be shown for community supported templates
1913821 - Need definitive steps about uninstalling descheduler operator
1913851 - Cluster Tasks are not sorted in pipeline builder
1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists
1913951 - Update the Devfile Sample Repo to an Official Repo Host
1913960 - Cluster Autoscaler should use 1.20 dependencies
1913969 - Field dependency descriptor can sometimes cause an exception
1914060 - Disk created from 'Import via Registry' cannot be used as boot disk
1914066 - [sriov] sriov dp pod crash when delete ovs HW offload policy
1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)
1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances
1914125 - Still using /dev/vde as default device path when create localvolume
1914183 - Empty NAD page is missing link to quickstarts
1914196 - target port in "from dockerfile" flow does nothing
1914204 - Creating VM from dev perspective may fail with template not found error
1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets
1914212 - [e2e][automation] Add test to validate bootable disk source
1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes
1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows
1914287 - Bring back selfLink
1914301 - User VM Template source should show the same provider as template itself
1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs
1914309 - /terminal page when WTO not installed shows nonsensical error
1914334 - order of getting started samples is arbitrary
1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel] timeout on s390x
1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI
1914405 - Quick search modal should be opened when coming back from a selection
1914407 - It's not clear that node-ca is running as non-root
1914427 - Count of pods on the dashboard is incorrect
1914439 - Typo in SRIOV port create command example
1914451 - cluster-storage-operator pod running as root
1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true
1914642 - Customize Wizard Storage tab does not pass validation
1914723 - SamplesTBRInaccessibleOnBoot Alert has a misspelling
1914793 - device names should not be translated
1914894 - Warn about using non-groupified api version
1914926 - webdriver-manager pulls incorrect version of ChromeDriver due to a bug
1914932 - Put correct resource name in relatedObjects
1914938 - PVC disk is not shown on customization wizard general tab
1914941 - VM Template rootdisk is not deleted after fetching default disk bus
1914975 - Collect logs from openshift-sdn namespace
1915003 - No estimate of average node readiness during lifetime of a cluster
1915027 - fix MCS blocking iptables rules
1915041 - s3:ListMultipartUploadParts is relied on implicitly
1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons
1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours
1915085 - Pods created and rapidly terminated get stuck
1915114 - [aws-c2s] worker machines are not created during install
1915133 - Missing default pinned nav items in dev perspective
1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource
1915187 - Remove the "Tech preview" tag in web-console for volumesnapshot
1915188 - Remove HostSubnet anonymization
1915200 - [OCP 4.7+ OCS 4.6] Arbiter related Note should not show up during UI deployment
1915217 - OKD payloads expect to be signed with production keys
1915220 - Remove dropdown workaround for user settings
1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure
1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod
1915277 - [e2e][automation] fix cdi upload form test
1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout
1915304 - Updating scheduling component builder & base images to be consistent with ART
1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows node
1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection
1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod
1915357 - Dev Catalog doesn't load anything if virtualization operator is installed
1915379 - New template wizard should require provider and make support input a dropdown type
1915408 - Failure in operator-registry kind e2e test
1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation
1915460 - Cluster name size might affect installations
1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance
1915540 - Silent 4.7 RHCOS install failure on ppc64le
1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)
1915582 - p&f: carry upstream pr 97860
1915594 - [e2e][automation] Improve test for disk validation
1915617 - Bump bootimage for various fixes
1915624 - "Please fill in the following field: Template provider" blocks customize wizard
1915627 - Translate Guided Tour text.
1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error
1915647 - Intermittent White screen when the connector dragged to revision
1915649 - "Template support" pop up is not a warning; checkbox text should be rephrased
1915654 - [e2e][automation] Add a verification for Affinity modal should hint "Matching node found"
1915661 - Can't run the 'oc adm prune' command in a pod
1915672 - Kuryr doesn't work with selfLink disabled.
1915674 - Golden image PVC creation - storage size should be taken from the template
1915685 - Message for not supported template is not clear enough
1915760 - Need to increase timeout to wait rhel worker get ready
1915793 - quick starts panel syncs incorrectly across browser windows
1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster
1915818 - vsphere-problem-detector: use "_totals" in metrics
1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol
1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version
1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0
1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics
1915885 - Kuryr doesn't support workers running on multiple subnets
1915898 - TaskRun log output shows "undefined" in streaming
1915907 - test/cmd/builds.sh uses docker.io
1915912 - sig-storage-csi-snapshotter image not available
1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder & base images to be consistent with ART
1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard
1915939 - Resizing the browser window removes Web Terminal Icon
1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7
1915962 - ROKS: manifest with machine health check fails to apply in 4.7
1915972 - Global configuration breadcrumbs do not work as expected
1915981 - Install ethtool and conntrack in container for debugging
1915995 - "Edit RoleBinding Subject" action under RoleBinding list page kebab actions causes unhandled exception
1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups
1916021 - OLM enters infinite loop if Pending CSV replaces itself
1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry
1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert's annotations
1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk
1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration
1916145 - Explicitly set minimum versions of python libraries
1916164 - Update csi-driver-nfs builder & base images to be consistent with ART
1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7
1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third
1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2
1916379 - error metrics from vsphere-problem-detector should be gauge
1916382 - Can't create ext4 filesystems with Ignition
1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving 'verified: false' even for verified updates
1916401 - Deleting an ingress controller with a bad DNS Record hangs
1916417 - [Kuryr] Must-gather does not have all Custom Resources information
1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image
1916454 - teach CCO about upgradeability from 4.6 to 4.7
1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documentation
1916502 - Boot disk mirroring fails with mdadm error
1916524 - Two rootdisk shows on storage step
1916580 - Default yaml is broken for VM and VM template
1916621 - oc adm node-logs examples are wrong
1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret.
1916692 - Possibly fails to destroy LB and thus cluster
1916711 - Update Kube dependencies in MCO to 1.20.0
1916747 - remove links to quick starts if virtualization operator isn't updated to 2.6
1916764 - editing a workload with no application applied, will auto fill the app
1916834 - Pipeline Metrics - Text Updates
1916843 - collect logs from openshift-sdn-controller pod
1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed
1916882 - OCS 4.7 LSO: wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually
1916888 - OCS wizard Donor chart does not get updated when Device Type is edited
1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error "Forbidden: cannot specify lbFloatingIP and apiFloatingIP together"
1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace
1917101 - [UPI on oVirt] - 'RHCOS image' topic isn't located in the right place in UPI document
1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to '"ProxyConfigController" controller failed to sync "key"' error
1917117 - Common templates - disks screen: invalid disk name
1917124 - Custom template - clone existing PVC - the name of the target VM's data volume is hard-coded; only one VM can be created
1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator
1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable.
1917148 - [oVirt] Consume 23-10 ovirt sdk
1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened
1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console
1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory
1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7
1917327 - annotations.message maybe wrong for NTOPodsNotReady alert
1917367 - Refactor periodic.go
1917371 - Add docs on how to use the built-in profiler
1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console
1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui
1917484 - [BM][IPI] Failed to scale down machineset
1917522 - Deprecate --filter-by-os in oc adm catalog mirror
1917537 - controllers continuously busy reconciling operator
1917551 - use min_over_time for vsphere prometheus alerts
1917585 - OLM Operator install page missing i18n
1917587 - Manila CSI operator becomes degraded if user doesn't have permissions to list share types
1917605 - Deleting an exgw causes pods to no longer route to other exgws
1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API
1917656 - Add to Project/application for eventSources from topology shows 404
1917658 - Show TP badge for sources powered by camel connectors in create flow
1917660 - Editing parallelism of job get error info
1917678 - Could not provision pv when no symlink and target found on rhel worker
1917679 - Hide double CTA in admin pipelineruns tab
1917683 - NodeTextFileCollectorScrapeError alert in OCP 4.6 cluster.
1917759 - Console operator panics after setting plugin that does not exists to the console-operator config 1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0 1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0 1917799 - Gather a list of names and versions of installed OLM operators 1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error 1917814 - Show Broker create option in eventing under admin perspective 1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1917872 - [oVirt] rebase on latest SDK 2021-01-12 1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image 1917938 - upgrade version of dnsmasq package 1917942 - Canary controller causes panic in ingress-operator 1918019 - Undesired scrollbars in markdown area of QuickStart 1918068 - Flaky olm integration tests 1918085 - reversed name of job and namespace in cvo log 1918112 - Flavor is not editable if a customize VM is created from cli 1918129 - Update IO sample archive with missing resources & remove IP anonymization from clusteroperator resources 1918132 - i18n: Volume Snapshot Contents menu is not translated 1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2 1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn't be installed on OSP 1918153 - When `&` character is set as an environment variable in a build config it is getting converted as `\u0026` 1918185 - Capitalization on PLR details page 1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections 1918318 - Kamelet connector's are not shown in eventing section under Admin perspective 1918351 - Gather SAP configuration (SCC & ClusterRoleBinding) 1918375 - [calico] rbac-proxy container in kube-proxy fails to create tokenreviews 1918395 - [ovirt] increase livenessProbe period 
1918415 - MCD nil pointer on dropins 1918438 - [ja_JP, zh_CN] Serverless i18n misses 1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig 1918471 - CustomNoUpgrade Feature gates are not working correctly 1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk 1918622 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1918623 - Updating ose-jenkins-agent-nodejs-12 builder & base images to be consistent with ART 1918625 - Updating ose-jenkins-agent-nodejs-10 builder & base images to be consistent with ART 1918635 - Updating openshift-jenkins-2 builder & base images to be consistent with ART #1197 1918639 - Event listener with triggerRef crashes the console 1918648 - Subscription page doesn't show InstallPlan correctly 1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack 1918748 - helmchartrepo is not http(s)_proxy-aware 1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI 1918803 - Need dedicated details page w/ global config breadcrumbs for 'KnativeServing' plugin 1918826 - Insights popover icons are not horizontally aligned 1918879 - need better debug for bad pull secrets 1918958 - The default NMstate instance from the operator is incorrect 1919097 - Close bracket ")" missing at the end of the sentence in the UI 1919231 - quick search modal cut off on smaller screens 1919259 - Make "Add x" singular in Pipeline Builder 1919260 - VM Template list actions should not wrap 1919271 - NM prepender script doesn't support systemd-resolved 1919341 - Updating ose-jenkins-agent-maven builder & base images to be consistent with ART 1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry 1919379 - dotnet logo out of date 1919387 - Console login fails with no error when it can't write to localStorage 1919396 - A11y Violation: svg-img-alt on Pod Status ring 1919407 - 
OpenStack IPI has three-node control plane limitation, but InstallConfigs aren't verified 1919750 - Search InstallPlans got Minified React error 1919778 - Upgrade is stuck in insights operator Degraded with "Source clusterconfig could not be retrieved" until insights operator pod is manually deleted 1919823 - OCP 4.7 Internationalization Chinese tranlate issue 1919851 - Visualization does not render when Pipeline & Task share same name 1919862 - The tip information for `oc new-project --skip-config-write` is wrong 1919876 - VM created via customize wizard cannot inherit template's PVC attributes 1919877 - Click on KSVC breaks with white screen 1919879 - The toolbox container name is changed from 'toolbox-root' to 'toolbox-' in a chroot environment 1919945 - user entered name value overridden by default value when selecting a git repository 1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference 1919970 - NTO does not update when the tuned profile is updated. 
1919999 - Bump Cluster Resource Operator Golang Versions 1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration 1920200 - user-settings network error results in infinite loop of requests 1920205 - operator-registry e2e tests not working properly 1920214 - Bump golang to 1.15 in cluster-resource-override-admission 1920248 - re-running the pipelinerun with pipelinespec crashes the UI 1920320 - VM template field is "Not available" if it's created from common template 1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode` 1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs 1920390 - Monitoring > Metrics graph shifts to the left when clicking the "Stacked" option and when toggling data series lines on / off 1920426 - Egress Router CNI OWNERS file should have ovn-k team members 1920427 - Need to update `oc login` help page since we don't support prompt interactively for the username 1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time 1920438 - openshift-tuned panics on turning debugging on/off. 
1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn 1920481 - kuryr-cni pods using unreasonable amount of CPU 1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof 1920524 - Topology graph crashes adding Open Data Hub operator 1920526 - catalog operator causing CPU spikes and bad etcd performance 1920551 - Boot Order is not editable for Templates in "openshift" namespace 1920555 - bump cluster-resource-override-admission api dependencies 1920571 - fcp multipath will not recover failed paths automatically 1920619 - Remove default scheduler profile value 1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present 1920674 - MissingKey errors in bindings namespace 1920684 - Text in language preferences modal is misleading 1920695 - CI is broken because of bad image registry reference in the Makefile 1920756 - update generic-admission-server library to get the system:masters authorization optimization 1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for "network-check-target" failed when "defaultNodeSelector" is set 1920771 - i18n: Delete persistent volume claim drop down is not translated 1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI 1920912 - Unable to power off BMH from console 1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by "2" 1920984 - [e2e][automation] some menu items names are out dated 1921013 - Gather PersistentVolume definition (if any) used in image registry config 1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior) 1921087 - 'start next quick start' link doesn't work and is unintuitive 1921088 - test-cmd is failing on volumes.sh pretty consistently 1921248 - Clarify the kubelet configuration cr description 1921253 - Text filter default placeholder text not internationalized 1921258 - User Preferences: Active perspective 
and project change in the current window when selected in a different window 1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo 1921277 - Fix Warning and Info log statements to handle arguments 1921281 - oc get -o yaml --export returns "error: unknown flag: --export" 1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn't exist 1921556 - [OCS with Vault]: OCS pods didn't comeup after deploying with Vault details from UI 1921572 - For external source (i.e GitHub Source) form view as well shows yaml 1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass 1921610 - Pipeline metrics font size inconsistency 1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921655 - [OSP] Incorrect error handling during cloudinfo generation 1921713 - [e2e][automation] fix failing VM migration tests 1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view 1921774 - delete application modal errors when a resource cannot be found 1921806 - Explore page APIResourceLinks aren't i18ned 1921823 - CheckBoxControls not internationalized 1921836 - AccessTableRows don't internationalize "User" or "Group" 1921857 - Test flake when hitting router in e2e tests due to one router not being up to date 1921880 - Dynamic plugins are not initialized on console load in production mode 1921911 - Installer PR #4589 is causing leak of IAM role policy bindings 1921921 - "Global Configuration" breadcrumb does not use sentence case 1921949 - Console bug - source code URL broken for gitlab self-hosted repositories 1921954 - Subscription-related constraints in ResolutionFailed events are misleading 1922015 - buttons in modal header are invisible on Safari 1922021 - Nodes terminal page 'Expand' 'Collapse' button not translated 1922050 - [e2e][automation] 
Improve vm clone tests 1922066 - Cannot create VM from custom template which has extra disk 1922098 - Namespace selection dialog is not closed after select a namespace 1922099 - Updated Readme documentation for QE code review and setup 1922146 - Egress Router CNI doesn't have logging support. 1922267 - Collect specific ADFS error 1922292 - Bump RHCOS boot images for 4.7 1922454 - CRI-O doesn't enable pprof by default 1922473 - reconcile LSO images for 4.8 1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace 1922782 - Source registry missing docker:// in yaml 1922907 - Interop UI Tests - step implementation for updating feature files 1922911 - Page crash when click the "Stacked" checkbox after clicking the data series toggle buttons 1922991 - "verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build" test fails on OKD 1923003 - WebConsole Insights widget showing "Issues pending" when the cluster doesn't report anything 1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources 1923102 - [vsphere-problem-detector-operator] pod's version is not correct 1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot 1923674 - k8s 1.20 vendor dependencies 1923721 - PipelineRun running status icon is not rotating 1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios 1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator 1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator 1923874 - Unable to specify values with % in kubeletconfig 1923888 - Fixes error metadata gathering 1923892 - Update arch.md after refactor. 1923894 - "installed" operator status in operatorhub page does not reflect the real status of operator 1923895 - Changelog generation. 
1923911 - [e2e][automation] Improve tests for vm details page and list filter 1923945 - PVC Name and Namespace resets when user changes os/flavor/workload 1923951 - EventSources shows `undefined` in project 1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins 1924046 - Localhost: Refreshing on a Project removes it from nav item urls 1924078 - Topology quick search View all results footer should be sticky. 1924081 - NTO should ship the latest Tuned daemon release 2.15 1924084 - backend tests incorrectly hard-code artifacts dir 1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build 1924135 - Under sufficient load, CRI-O may segfault 1924143 - Code Editor Decorator url is broken for Bitbucket repos 1924188 - Language selector dropdown doesn't always pre-select the language 1924365 - Add extra disk for VM which use boot source PXE 1924383 - Degraded network operator during upgrade to 4.7.z 1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 1924583 - Deprectaed templates are listed in the Templates screen 1924870 - pick upstream pr#96901: plumb context with request deadline 1924955 - Images from Private external registry not working in deploy Image 1924961 - k8sutil.TrimDNS1123Label creates invalid values 1924985 - Build egress-router-cni for both RHEL 7 and 8 1925020 - Console demo plugin deployment image shoult not point to dockerhub 1925024 - Remove extra validations on kafka source form view net section 1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running 1925072 - NTO needs to ship the current latest stalld v1.7.0 1925163 - Missing info about dev catalog in boot source template column 1925200 - Monitoring Alert icon is missing on the workload in Topology view 1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1 1925319 - bash syntax error in configure-ovs.sh script 1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data 1925516 - Pipeline Metrics Tooltips are overlapping data 1925562 - Add new ArgoCD link from GitOps application environments page 1925596 - Gitops details page image and commit id text overflows past card boundary 1926556 - 'excessive etcd leader changes' test case failing in serial job because prometheus data is wiped by machine set test 1926588 - The tarball of operator-sdk is not ready for ocp4.7 1927456 - 4.7 still points to 4.6 catalog images 1927500 - API server exits non-zero on 2 SIGTERM signals 1929278 - Monitoring workloads using too high a priorityclass 1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api 1929920 - Cluster monitoring documentation link is broken - 404 not found

  1. References:

https://access.redhat.com/security/cve/CVE-2018-10103 https://access.redhat.com/security/cve/CVE-2018-10105 https://access.redhat.com/security/cve/CVE-2018-14461 https://access.redhat.com/security/cve/CVE-2018-14462 https://access.redhat.com/security/cve/CVE-2018-14463 https://access.redhat.com/security/cve/CVE-2018-14464 https://access.redhat.com/security/cve/CVE-2018-14465 https://access.redhat.com/security/cve/CVE-2018-14466 https://access.redhat.com/security/cve/CVE-2018-14467 https://access.redhat.com/security/cve/CVE-2018-14468 https://access.redhat.com/security/cve/CVE-2018-14469 https://access.redhat.com/security/cve/CVE-2018-14470 https://access.redhat.com/security/cve/CVE-2018-14553 https://access.redhat.com/security/cve/CVE-2018-14879 https://access.redhat.com/security/cve/CVE-2018-14880 https://access.redhat.com/security/cve/CVE-2018-14881 https://access.redhat.com/security/cve/CVE-2018-14882 https://access.redhat.com/security/cve/CVE-2018-16227 https://access.redhat.com/security/cve/CVE-2018-16228 https://access.redhat.com/security/cve/CVE-2018-16229 https://access.redhat.com/security/cve/CVE-2018-16230 https://access.redhat.com/security/cve/CVE-2018-16300 https://access.redhat.com/security/cve/CVE-2018-16451 https://access.redhat.com/security/cve/CVE-2018-16452 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2019-3884 https://access.redhat.com/security/cve/CVE-2019-5018 https://access.redhat.com/security/cve/CVE-2019-6977 https://access.redhat.com/security/cve/CVE-2019-6978 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 
https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9455 https://access.redhat.com/security/cve/CVE-2019-9458 https://access.redhat.com/security/cve/CVE-2019-11068 https://access.redhat.com/security/cve/CVE-2019-12614 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13225 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15165 https://access.redhat.com/security/cve/CVE-2019-15166 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-15917 https://access.redhat.com/security/cve/CVE-2019-15925 https://access.redhat.com/security/cve/CVE-2019-16167 https://access.redhat.com/security/cve/CVE-2019-16168 https://access.redhat.com/security/cve/CVE-2019-16231 https://access.redhat.com/security/cve/CVE-2019-16233 https://access.redhat.com/security/cve/CVE-2019-16935 https://access.redhat.com/security/cve/CVE-2019-17450 https://access.redhat.com/security/cve/CVE-2019-17546 https://access.redhat.com/security/cve/CVE-2019-18197 https://access.redhat.com/security/cve/CVE-2019-18808 
https://access.redhat.com/security/cve/CVE-2019-18809 https://access.redhat.com/security/cve/CVE-2019-19046 https://access.redhat.com/security/cve/CVE-2019-19056 https://access.redhat.com/security/cve/CVE-2019-19062 https://access.redhat.com/security/cve/CVE-2019-19063 https://access.redhat.com/security/cve/CVE-2019-19068 https://access.redhat.com/security/cve/CVE-2019-19072 https://access.redhat.com/security/cve/CVE-2019-19221 https://access.redhat.com/security/cve/CVE-2019-19319 https://access.redhat.com/security/cve/CVE-2019-19332 https://access.redhat.com/security/cve/CVE-2019-19447 https://access.redhat.com/security/cve/CVE-2019-19524 https://access.redhat.com/security/cve/CVE-2019-19533 https://access.redhat.com/security/cve/CVE-2019-19537 https://access.redhat.com/security/cve/CVE-2019-19543 https://access.redhat.com/security/cve/CVE-2019-19602 https://access.redhat.com/security/cve/CVE-2019-19767 https://access.redhat.com/security/cve/CVE-2019-19770 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-19956 https://access.redhat.com/security/cve/CVE-2019-20054 https://access.redhat.com/security/cve/CVE-2019-20218 https://access.redhat.com/security/cve/CVE-2019-20386 https://access.redhat.com/security/cve/CVE-2019-20387 https://access.redhat.com/security/cve/CVE-2019-20388 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20636 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-20812 https://access.redhat.com/security/cve/CVE-2019-20907 https://access.redhat.com/security/cve/CVE-2019-20916 https://access.redhat.com/security/cve/CVE-2020-0305 https://access.redhat.com/security/cve/CVE-2020-0444 https://access.redhat.com/security/cve/CVE-2020-1716 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-1751 https://access.redhat.com/security/cve/CVE-2020-1752 
https://access.redhat.com/security/cve/CVE-2020-1971 https://access.redhat.com/security/cve/CVE-2020-2574 https://access.redhat.com/security/cve/CVE-2020-2752 https://access.redhat.com/security/cve/CVE-2020-2922 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3898 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-6405 https://access.redhat.com/security/cve/CVE-2020-7595 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-8177 https://access.redhat.com/security/cve/CVE-2020-8492 https://access.redhat.com/security/cve/CVE-2020-8563 https://access.redhat.com/security/cve/CVE-2020-8566 https://access.redhat.com/security/cve/CVE-2020-8619 https://access.redhat.com/security/cve/CVE-2020-8622 https://access.redhat.com/security/cve/CVE-2020-8623 https://access.redhat.com/security/cve/CVE-2020-8624 https://access.redhat.com/security/cve/CVE-2020-8647 https://access.redhat.com/security/cve/CVE-2020-8648 https://access.redhat.com/security/cve/CVE-2020-8649 https://access.redhat.com/security/cve/CVE-2020-9327 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 
https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-10029 https://access.redhat.com/security/cve/CVE-2020-10732 https://access.redhat.com/security/cve/CVE-2020-10749 https://access.redhat.com/security/cve/CVE-2020-10751 https://access.redhat.com/security/cve/CVE-2020-10763 https://access.redhat.com/security/cve/CVE-2020-10773 https://access.redhat.com/security/cve/CVE-2020-10774 https://access.redhat.com/security/cve/CVE-2020-10942 https://access.redhat.com/security/cve/CVE-2020-11565 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-12465 https://access.redhat.com/security/cve/CVE-2020-12655 https://access.redhat.com/security/cve/CVE-2020-12659 https://access.redhat.com/security/cve/CVE-2020-12770 https://access.redhat.com/security/cve/CVE-2020-12826 https://access.redhat.com/security/cve/CVE-2020-13249 https://access.redhat.com/security/cve/CVE-2020-13630 https://access.redhat.com/security/cve/CVE-2020-13631 https://access.redhat.com/security/cve/CVE-2020-13632 https://access.redhat.com/security/cve/CVE-2020-14019 https://access.redhat.com/security/cve/CVE-2020-14040 https://access.redhat.com/security/cve/CVE-2020-14381 https://access.redhat.com/security/cve/CVE-2020-14382 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-14422 https://access.redhat.com/security/cve/CVE-2020-15157 
https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-15862 https://access.redhat.com/security/cve/CVE-2020-15999 https://access.redhat.com/security/cve/CVE-2020-16166 https://access.redhat.com/security/cve/CVE-2020-24490 https://access.redhat.com/security/cve/CVE-2020-24659 https://access.redhat.com/security/cve/CVE-2020-25211 https://access.redhat.com/security/cve/CVE-2020-25641 https://access.redhat.com/security/cve/CVE-2020-25658 https://access.redhat.com/security/cve/CVE-2020-25661 https://access.redhat.com/security/cve/CVE-2020-25662 https://access.redhat.com/security/cve/CVE-2020-25681 https://access.redhat.com/security/cve/CVE-2020-25682 https://access.redhat.com/security/cve/CVE-2020-25683 https://access.redhat.com/security/cve/CVE-2020-25684 https://access.redhat.com/security/cve/CVE-2020-25685 https://access.redhat.com/security/cve/CVE-2020-25686 https://access.redhat.com/security/cve/CVE-2020-25687 https://access.redhat.com/security/cve/CVE-2020-25694 https://access.redhat.com/security/cve/CVE-2020-25696 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-27813 https://access.redhat.com/security/cve/CVE-2020-27846 https://access.redhat.com/security/cve/CVE-2020-28362 https://access.redhat.com/security/cve/CVE-2020-29652 https://access.redhat.com/security/cve/CVE-2021-2007 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

This advisory provides the following updates among others:

  • Enhances profile parsing time.
  • Fixes excessive resource consumption from the Operator.
  • Fixes default content image.
  • Fixes outdated remediation handling.

Bugs fixed (https://bugzilla.redhat.com/):

1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers 1918990 - ComplianceSuite scans use quay content image for initContainer 1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present 1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules 1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console.

Bug Fix(es):

  • Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)

  • The compliancesuite object returns error with ocp4-cis tailored profile (BZ#1902251)

  • The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object (BZ#1902634)

  • [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object (BZ#1907414)

  • The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator (BZ#1908991)

  • Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" (BZ#1909081)

  • [OCP v46] Always update the default profilebundles on Compliance operator startup (BZ#1909122)

Bugs fixed (https://bugzilla.redhat.com/):

1899479 - Aggregator pod tries to parse ConfigMaps without results 1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service 1902251 - The compliancesuite object returns error with ocp4-cis tailored profile 1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object 1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object 1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator 1909081 - Applying the "rhcos4-moderate" compliance profile leads to Ignition error "something else exists at that path" 1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup

  1. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 2007379 - Events are not generated for master offset for ordinary clock 2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace 2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address 2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error 2007522 - No new local-storage-operator-metadata-container is build for 4.10 2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10 2007580 - Azure cilium installs are failing e2e tests 2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10 2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes 2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures 2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow 2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates 2007802 - AWS machine actuator get stuck if machine is completely missing 2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator 2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process 2008151 - Topology breaks on clicking in empty state 2008185 - Console operator go.mod should use go 1.16.version 2008201 - openstack-az job is failing on haproxy idle test 2008207 - vsphere CSI driver doesn't set resource limits 2008223 - gather_audit_logs: fix oc command line to get the current audit profile 2008235 - The Save button in the Edit DC form remains disabled 2008256 - Update Internationalization README with scope info 2008321 - Add correct documentation link for MON_DISK_LOW 2008462 - Disable PodSecurity feature gate for 4.10 2008490 - 
Backing store details page does not contain all the kebab actions. 2010181 - Environment variables not getting reset on reload on deployment edit form 2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2010341 - OpenShift Alerting Rules Style-Guide Compliance 2010342 - Local console builds can have out of memory errors 2010345 - OpenShift Alerting Rules Style-Guide Compliance 2010348 - Reverts PIE build mode for K8S components 2010352 - OpenShift Alerting Rules Style-Guide Compliance 2010354 - OpenShift Alerting Rules Style-Guide Compliance 2010359 - OpenShift Alerting Rules Style-Guide Compliance 2010368 - OpenShift Alerting Rules Style-Guide Compliance 2010376 - OpenShift Alerting Rules Style-Guide Compliance 2010662 - Cluster is unhealthy after image-registry-operator tests 2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent) 2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API 2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address 2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing 2010864 - Failure building EFS operator 2010910 - ptp worker events unable to identify interface for multiple interfaces 2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24 2010921 - Azure Stack Hub does not handle additionalTrustBundle 2010931 - SRO CSV uses non default category "Drivers and plugins" 2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
2011038 - optional operator conditions are confusing 2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass 2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's 2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image 2011368 - Tooltip in pipeline visualization shows misleading data 2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels 2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards 2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster 2011513 - Kubelet rejects pods that use resources that should be freed by completed pods 2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine" 2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented 2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore 2011733 - Repository README points to broken documentarion link 2011753 - Ironic resumes clean before raid configuration job is actually completed 2011809 - The nodes page in the openshift console doesn't work. 
You just get a blank page 2011822 - Obfuscation doesn't work at clusters with OVN 2011882 - SRO helm charts not synced with templates 2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot 2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages 2011903 - vsphere-problem-detector: session leak 2011927 - OLM should allow users to specify a proxy for GRPC connections 2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods 2011960 - [tracker] Storage operator is not available after reboot cluster instances 2011971 - ICNI2 pods are stuck in ContainerCreating state 2011972 - Ingress operator not creating wildcard route for hypershift clusters 2011977 - SRO bundle references non-existent image 2012069 - Refactoring Status controller 2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI 2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group 2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)" 2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig 2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off 2012407 - [e2e][automation] improve vm tab console tests 2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label 2012562 - migration condition is not detected in list view 2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written 2012780 - The port 50936 used by haproxy is occupied by kube-apiserver 2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working 2012902 - Neutron Ports assigned to Completed Pods are not reused Edit 2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing 
in OCP 4.8 monitoring stack 2012971 - Disable operands deletes 2013034 - Cannot install to openshift-nmstate namespace 2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine) 2013199 - post reboot of node SRIOV policy taking huge time 2013203 - UI breaks when trying to create block pool before storage cluster/system creation 2013222 - Full breakage for nightly payload promotion 2013273 - Nil pointer exception when phc2sys options are missing 2013321 - TuneD: high CPU utilization of the TuneD daemon. 2013416 - Multiple assets emit different content to the same filename 2013431 - Application selector dropdown has incorrect font-size and positioning 2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8 2013545 - Service binding created outside topology is not visible 2013599 - Scorecard support storage is not included in ocp4.9 2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide) 2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on 'Add Capacity' button click 2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu 2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English 2015549 - Observe - Metrics: Column heading and pagination text is in English 2015557 - Workloads - DeploymentConfigs : Error message is in English 2015568 - Compute - Nodes : CPU column's values are in English 2015635 - Storage operator fails causing installation to fail on ASH 2015660 - "Finishing boot source customization" screen should not use term "patched" 2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node 2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin 2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning 2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud 2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch 2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail 2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed) 2016008 - [4.10] Bootimage bump tracker 2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver 2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator 2016054 - No e2e CI presubmit configured for release component cluster-autoscaler 2016055 - No e2e CI presubmit configured for release component console 2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8" 2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager 2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers 2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
2016179 - Add Sprint 208 translations 2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager 2016235 - should update to 7.5.11 for grafana resources version label 2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails 2016334 - shiftstack: SRIOV nic reported as not supported 2016352 - Some pods start before CA resources are present 2016367 - Empty task box is getting created for a pipeline without finally task 2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts 2016438 - Feature flag gating is missing in few extensions contributed via knative plugin 2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc 2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets 2016453 - Complete i18n for GaugeChart defaults 2016479 - iface-id-ver is not getting updated for existing lsp 2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear 2016951 - dynamic actions list is not disabling "open console" for stopped vms 2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available 2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances 2017016 - [REF] Virtualization menu 2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn 2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly 2017130 - t is not a function error navigating to details page 2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue 2017244 - ovirt csi operator static files creation is in the wrong order 2017276 - [4.10] Volume mounts not created with the correct security context 2017327 - When run opm index prune failed with error removing operator 
package cic-operator FOREIGN KEY constraint failed. 2022447 - ServiceAccount in manifests conflicts with OLM 2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. 2025821 - Make "Network Attachment Definitions" available to regular user 2025823 - The console nav bar ignores plugin separator in existing sections 2025830 - CentOS capitalizaion is wrong 2025837 - Warn users that the RHEL URL expire 2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-* 2025903 - [UI] RoleBindings tab doesn't show correct rolebindings 2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2026178 - OpenShift Alerting Rules Style-Guide Compliance 2026209 - Updation of task is getting failed (tekton hub integration) 2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io" 2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates 2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct 2026352 - Kube-Scheduler revision-pruner fail during install of new cluster 2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment 2026383 - Error when rendering custom Grafana dashboard through ConfigMap 2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation 2026396 - Cachito Issues: sriov-network-operator Image build failure 2026488 - openshift-controller-manager - delete event is repeating pathologically 2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists 2039382 - gather_metallb_logs does not have execution permission 2039406 - logout from rest session after vsphere operator sync is finished 2039408 - Add GCP region northamerica-northeast2 to allowed regions 2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration 2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment 2039491 - oc - git:// protocol used in unit tests 2039516 - Bump OVN to ovn21.12-21.12.0-25 2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate 2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled 2039541 - Resolv-prepender script duplicating entries 2039586 - [e2e] update centos8 to centos stream8 2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty 2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3' 2039670 - Create PDBs for control plane components 2039678 - Page goes blank when create image pull secret 2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported 2039743 - React missing key warning when open operator hub detail page (and maybe others as well) 2039756 - React missing key warning when open KnativeServing details 2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab 2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard 2039781 - [GSS] OBC is not visible by admin of a Project on Console 2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector 2039868 - Insights Advisor widget is not in the disabled state when the Insights 
Operator is disabled 2039880 - Log level too low for control plane metrics 2039919 - Add E2E test for router compression feature 2039981 - ZTP for standard clusters installs stalld on master nodes 2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. 2043117 - Recommended operators links are erroneously treated as external 2043130 - Update CSI sidecars to the latest release for 4.10 2043234 - Missing validation when creating several BGPPeers with the same peerAddress 2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler 2043254 - crio does not bind the security profiles directory 2043296 - Ignition fails when reusing existing statically-keyed LUKS volume 2043297 - [4.10] Bootimage bump tracker 2043316 - RHCOS VM fails to boot on Nutanix AOS 2043446 - Rebase aws-efs-utils to the latest upstream version. 2043556 - Add proper ci-operator configuration to ironic and ironic-agent images 2043577 - DPU network operator 2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator 2043675 - Too many machines deleted by cluster autoscaler when scaling down 2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation 2043709 - Logging flags no longer being bound to command line 2043721 - Installer bootstrap hosts using outdated kubelet containing bugs 2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather 2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23 2043780 - Bump router to k8s.io/api 1.23 2043787 - Bump cluster-dns-operator to k8s.io/api 1.23 2043801 - Bump CoreDNS to k8s.io/api 1.23 2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown 2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.

Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - noarch

  1. Description:

WebKitGTK+ is a port of the WebKit portable web rendering engine to the GTK+ platform. These packages provide WebKitGTK+ for GTK+ 3.

The following packages have been upgraded to a later upstream version: webkitgtk4 (2.28.2).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 7.9 Release Notes linked from the References section.

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
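As a hedged illustration of verifying that the fix is in place, the sketch below compares an installed webkitgtk4 upstream version string against the fixed 2.28.2 release named in the package list. The `is_fixed` helper is a simplified numeric comparison written only for this example; it is not rpm's full version-comparison algorithm, which also handles release tags, epochs, and non-numeric segments.

```python
def version_tuple(version):
    # Turn "2.28.2" into (2, 28, 2) for a simple numeric comparison.
    return tuple(int(part) for part in version.split("."))

def is_fixed(installed, fixed="2.28.2"):
    # True when the installed upstream version is at or above the fixed build.
    return version_tuple(installed) >= version_tuple(fixed)

print(is_fixed("2.28.2"))  # the patched upstream version from this advisory
print(is_fixed("2.24.4"))  # an earlier, unpatched upstream version
```

The installed version itself would come from the package manager on the affected host (for example, `rpm -q webkitgtk4`); the actual update is applied through the procedure in the Red Hat article linked above.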

  1. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

x86_64: webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

ppc64: webkitgtk4-2.28.2-2.el7.ppc.rpm webkitgtk4-2.28.2-2.el7.ppc64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm

ppc64le: webkitgtk4-2.28.2-2.el7.ppc64le.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm

s390x: webkitgtk4-2.28.2-2.el7.s390.rpm webkitgtk4-2.28.2-2.el7.s390x.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-2.28.2-2.el7.s390x.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

noarch: webkitgtk4-doc-2.28.2-2.el7.noarch.rpm

ppc64: webkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm webkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm webkitgtk4-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-devel-2.28.2-2.el7.ppc64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm

s390x: webkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm webkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm webkitgtk4-devel-2.28.2-2.el7.s390.rpm webkitgtk4-devel-2.28.2-2.el7.s390x.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: webkitgtk4-2.28.2-2.el7.src.rpm

x86_64: webkitgtk4-2.28.2-2.el7.i686.rpm webkitgtk4-2.28.2-2.el7.x86_64.rpm webkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm webkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm webkitgtk4-devel-2.28.2-2.el7.i686.rpm webkitgtk4-devel-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm webkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 8) - aarch64, ppc64le, s390x, x86_64

  1. Bugs fixed (https://bugzilla.redhat.com/):

1207179 - Select items matching non existing pattern does not unselect already selected 1566027 - can't correctly compute contents size if hidden files are included 1569868 - Browsing samba shares using gvfs is very slow 1652178 - [RFE] perf-tool run on wayland 1656262 - The terminal's character display is unclear on rhel8 guest after installing gnome 1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled 1692536 - login screen shows after gnome-initial-setup 1706008 - Sound Effect sometimes fails to change to selected option. 1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead. 1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined 1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly 1758891 - tracker-devel subpackage missing from el8 repos 1775345 - Rebase xdg-desktop-portal to 1.6 1778579 - Nautilus does not respect umask settings. 1779691 - Rebase xdg-desktop-portal-gtk to 1.6 1794045 - There are two different high contrast versions of desktop icons 1804719 - Update vte291 to 0.52.4 1805929 - RHEL 8.1 gnome-shell-extension errors 1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp 1814820 - No checkbox to install updates in the shutdown dialog 1816070 - "search for an application to open this file" dialog broken 1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution 1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution 1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution 1817143 - Rebase WebKitGTK to 2.28 1820759 - Include IO stall fixes 1820760 - Include IO fixes 1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening 1827030 - gnome-settings-daemon: subscription notification on CentOS Stream 1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content 1832347 - [Rebase] 
Rebase pipewire to 0.3.x 1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install 1837381 - Backport screen cast improvements to 8.3 1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version 1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6 1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113 1840080 - Can not control top bar menus via keys in Wayland 1840788 - [flatpak][rhel8] unable to build potrace as dependency 1843486 - Software crash after clicking Updates tab 1844578 - anaconda very rarely crashes at startup with a pygobject traceback 1846191 - usb adapters hotplug crashes gnome-shell 1847051 - JS ERROR: TypeError: area is null 1847061 - File search doesn't work under certain locales 1847062 - gnome-remote-desktop crash on QXL graphics 1847203 - gnome-shell: get_top_visible_window_actor(): gnome-shell killed by SIGSEGV 1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow 1854734 - PipeWire 0.2 should be required by xdg-desktop-portal 1866332 - Remove obsolete libusb-devel dependency 1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at "Started GNOME Display Manager" - GDM regression issue


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202004-1972",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "icloud",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "10.9.3"
      },
      {
        "model": "itunes",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.10.5"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.1"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.4"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.4"
      },
      {
        "model": "ipad os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "13.4"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.1 (macos high sierra)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.1 (macos mojave)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4 (ipod touch 7th generation)"
      },
      {
        "model": "safari",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.1 (macos catalina)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4 (ipad air 2 and later)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows, earlier than 7.18 (windows 10 and later)"
      },
      {
        "model": "icloud",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows, earlier than 10.9.3 (windows 7 and later)"
      },
      {
        "model": "itunes",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "for windows, earlier than 12.10.5 (windows 7 and later)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4 (apple tv 4k)"
      },
      {
        "model": "ipados",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4 (ipad mini 4 and later)"
      },
      {
        "model": "tvos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4 (apple tv hd)"
      },
      {
        "model": "ios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "apple",
        "version": "earlier than 13.4 (iphone 6s and later)"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "cpe_match": [
              {
                "cpe22Uri": "cpe:/a:apple:icloud",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:iphone_os",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:ipados",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:itunes",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/a:apple:safari",
                "vulnerable": true
              },
              {
                "cpe22Uri": "cpe:/o:apple:apple_tv",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 0.8
  },
  "cve": "CVE-2020-3894",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "HIGH",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 2.6,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 4.9,
            "id": "CVE-2020-3894",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "LOW",
            "trust": 1.1,
            "vectorString": "AV:N/AC:H/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "High",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 2.6,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-003664",
            "impactScore": null,
            "integrityImpact": "None",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Low",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:H/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "HIGH",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "NONE",
            "baseScore": 2.6,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 4.9,
            "id": "VHN-182019",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "LOW",
            "trust": 0.1,
            "vectorString": "AV:N/AC:H/AU:N/C:P/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 3.1,
            "baseSeverity": "LOW",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 1.6,
            "id": "CVE-2020-3894",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 3.1,
            "baseSeverity": "Low",
            "confidentialityImpact": "Low",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-003664",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-3894",
            "trust": 1.0,
            "value": "LOW"
          },
          {
            "author": "NVD",
            "id": "JVNDB-2020-003664",
            "trust": 0.8,
            "value": "Low"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202003-1562",
            "trust": 0.6,
            "value": "LOW"
          },
          {
            "author": "VULHUB",
            "id": "VHN-182019",
            "trust": 0.1,
            "value": "LOW"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-3894",
            "trust": 0.1,
            "value": "LOW"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A race condition was addressed with additional validation. This issue is fixed in iOS 13.4 and iPadOS 13.4, tvOS 13.4, Safari 13.1, iTunes for Windows 12.10.5, iCloud for Windows 10.9.3, iCloud for Windows 7.18. An application may be able to read restricted memory. plural Apple The product is vulnerable to a race condition due to a flawed validation process.You may be able to read the limited memory. Apple iOS, etc. are all products of Apple (Apple). Apple iOS is an operating system developed for mobile devices. Apple tvOS is a smart TV operating system. Apple iPadOS is an operating system for iPad tablets. A race condition vulnerability exists in the WebKit component of several Apple products. The following products and versions are affected: Windows-based Apple iCloud versions prior to 7.18 and 10.9.3; Windows-based iTunes versions prior to 12.10.5; iOS versions prior to 13.4; iPadOS versions prior to 13.4; Safari versions prior to 13.1; tvOS Versions prior to 13.4. WebKitGTK and WPE WebKit prior to version 2.24.1 failed to properly apply configured HTTP proxy settings when downloading livestream video (HLS, DASH, or Smooth Streaming), an error resulting in deanonymization. This issue was corrected by changing the way livestreams are downloaded. (CVE-2019-11070)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-6237)\nWebKitGTK and WPE WebKit prior to version 2.24.1 are vulnerable to address bar spoofing upon certain JavaScript redirections. An attacker could cause malicious web content to be displayed as if for a trusted URI. This is similar to the CVE-2018-8383 issue in Microsoft Edge. (CVE-2019-6251)\nA type confusion issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8506)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8524)\nA memory corruption issue was addressed with improved state management. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8535)\nA memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8536)\nA memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8551)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8558)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8559)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8563)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8571)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8583)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8584)\nMultiple memory corruption issues were addressed with improved memory handling. 
Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8586)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8587)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8594)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8595)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8596)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8597)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may result in the disclosure of process memory. (CVE-2019-8607)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8608)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8609)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8610)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8611)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8615)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8619)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8622)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8623)\nA logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8625)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8644)\nA logic issue existed in the handling of synchronous page loads. This issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8649)\nA logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8658)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8666)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8669)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8671)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8672)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8673)\nA logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8674)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8676)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8677)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8678)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8679)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8680)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8681)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8683)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8684)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8686)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8687)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8688)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8689)\nA logic issue existed in the handling of document loads. This issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8690)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8707)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8710)\nA logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8719)\nThis fixes a remote code execution in webkitgtk4. No further details are available in NIST. (CVE-2019-8720)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8726)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8733)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8735)\nMultiple memory corruption issues were addressed with improved memory handling. This issue is fixed in watchOS 6.1. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8743)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8763)\nA logic issue was addressed with improved state management. This issue is fixed in watchOS 6.1. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8764)\nMultiple memory corruption issues were addressed with improved memory handling. This issue is fixed in watchOS 6.1. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8765)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8766)\n\"Clear History and Website Data\" did not clear the history. The issue was addressed with improved data deletion. This issue is fixed in macOS Catalina 10.15. A user may be unable to delete browsing history items. (CVE-2019-8768)\nAn issue existed in the drawing of web page elements. The issue was addressed with improved logic. This issue is fixed in iOS 13.1 and iPadOS 13.1, macOS Catalina 10.15. Visiting a maliciously crafted website may reveal browsing history. (CVE-2019-8769)\nThis issue was addressed with improved iframe sandbox enforcement. Maliciously crafted web content may violate iframe sandboxing policy. (CVE-2019-8771)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8782)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8783)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8808)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8811)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8812)\nA logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2019-8813)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8814)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8815)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8816)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8819)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8820)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. 
(CVE-2019-8821)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8822)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8823)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8835)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8844)\nA use after free issue was addressed with improved memory management. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2019-8846)\nWebKitGTK up to and including 2.26.4 and WPE WebKit up to and including 2.26.4 (which are the versions right prior to 2.28.0) contains a memory corruption issue (use-after-free) that may lead to arbitrary code execution. This issue has been fixed in 2.28.0 with improved memory handling. (CVE-2020-10018)\nA use-after-free flaw exists in WebKitGTK. This flaw allows remote malicious users to execute arbitrary code or cause a denial of service. (CVE-2020-11793)\nA denial of service issue was addressed with improved memory handling. A malicious website may be able to cause a denial of service. A DOM object context may not have had a unique security origin. (CVE-2020-3864)\nMultiple memory corruption issues were addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3865)\nA logic issue was addressed with improved state management. Processing maliciously crafted web content may lead to universal cross site scripting. (CVE-2020-3867)\nMultiple memory corruption issues were addressed with improved memory handling. 
Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3868)\nA logic issue was addressed with improved restrictions. A file URL may be incorrectly processed. (CVE-2020-3894)\nA memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3895)\nA type confusion issue was addressed with improved memory handling. A remote attacker may be able to cause arbitrary code execution. (CVE-2020-3897)\nA memory consumption issue was addressed with improved memory handling. A remote attacker may be able to cause arbitrary code execution. (CVE-2020-3899)\nA memory corruption issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. (CVE-2020-3900)\nA type confusion issue was addressed with improved memory handling. Processing maliciously crafted web content may lead to arbitrary code execution. Processing maliciously crafted web content may lead to a cross site scripting attack. (CVE-2020-3902). Description:\n\nService Telemetry Framework (STF) provides automated collection of\nmeasurements and data from remote clients, such as Red Hat OpenStack\nPlatform or third-party nodes. \nDockerfiles and scripts should be amended either to refer to this new image\nspecifically, or to the latest image generally. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. In addition to persistent storage, Red Hat\nOpenShift Container Storage provisions a multicloud data management service\nwith an S3 compatible API. \n\nThese updated images include numerous security fixes, bug fixes, and\nenhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):

1806266 - Require an extension to the cephfs subvolume commands, that can return metadata regarding a subvolume
1813506 - Dockerfile not compatible with docker and buildah
1817438 - OSDs not distributed uniformly across OCS nodes on a 9-node AWS IPI setup
1817850 - [BAREMETAL] rook-ceph-operator does not reconcile when osd deployment is deleted when performed node replacement
1827157 - OSD hitting default CPU limit on AWS i3en.2xlarge instances limiting performance
1829055 - [RFE] add insecureEdgeTerminationPolicy: Redirect to noobaa mgmt route (http to https)
1833153 - add a variable for sleep time of rook operator between checks of downed OSD+Node.
1836299 - NooBaa Operator deploys with HPA that fires maxreplicas alerts by default
1842254 - [NooBaa] Compression stats do not add up when compression is disabled
1845976 - OCS 4.5 Independent mode: must-gather commands fail to collect ceph command outputs from external cluster
1849771 - [RFE] Account created by OBC should have same permissions as bucket owner
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854500 - [tracker-rhcs bug 1838931] mgr/volumes: add command to return metadata of a subvolume snapshot
1854501 - [Tracker-rhcs bug 1848494] pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes independent of the source subvolume
1854503 - [tracker-rhcs-bug 1848503] cephfs: Provide alternatives to increase the total cephfs subvolume snapshot counts to greater than the current 400 across a Cephfs volume
1856953 - CVE-2020-15586 golang: data race in certain net/http servers including ReverseProxy can lead to DoS
1858195 - [GSS] registry pod stuck in ContainerCreating due to pvc from cephfs storage class fail to mount
1859183 - PV expansion is failing in retry loop in pre-existing PV after upgrade to OCS 4.5 (i.e. if the PV spec does not contain expansion params)
1859229 - Rook should delete extra MON PVCs in case first reconcile takes too long and rook skips "b" and "c" (spawned from Bug 1840084#c14)
1859478 - OCS 4.6: Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
1860022 - OCS 4.6 Deployment: LBP CSV and pod should not be deployed since ob/obc CRDs are owned from OCS 4.5 onwards
1860034 - OCS 4.6 Deployment in ocs-ci: Toolbox pod in ContainerCreationError due to key admin-secret not found
1860670 - OCS 4.5 Uninstall External: Openshift-storage namespace in Terminating state as CephObjectStoreUser had finalizers remaining
1860848 - Add validation for rgw-pool-prefix in the ceph-external-cluster-details-exporter script
1861780 - [Tracker BZ1866386][IBM s390x] Mount Failed for CEPH while running couple of OCS test cases.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update
Advisory ID:       RHSA-2020:5633-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2020:5633
Issue date:        2021-02-24
CVE Names:         CVE-2018-10103 CVE-2018-10105 CVE-2018-14461
                   CVE-2018-14462 CVE-2018-14463 CVE-2018-14464
                   CVE-2018-14465 CVE-2018-14466 CVE-2018-14467
                   CVE-2018-14468 CVE-2018-14469 CVE-2018-14470
                   CVE-2018-14553 CVE-2018-14879 CVE-2018-14880
                   CVE-2018-14881 CVE-2018-14882 CVE-2018-16227
                   CVE-2018-16228 CVE-2018-16229 CVE-2018-16230
                   CVE-2018-16300 CVE-2018-16451 CVE-2018-16452
                   CVE-2018-20843 CVE-2019-3884 CVE-2019-5018
                   CVE-2019-6977 CVE-2019-6978
                   CVE-2019-8625 CVE-2019-8710 CVE-2019-8720
                   CVE-2019-8743 CVE-2019-8764 CVE-2019-8766
                   CVE-2019-8769 CVE-2019-8771 CVE-2019-8782
                   CVE-2019-8783 CVE-2019-8808 CVE-2019-8811
                   CVE-2019-8812 CVE-2019-8813 CVE-2019-8814
                   CVE-2019-8815 CVE-2019-8816 CVE-2019-8819
                   CVE-2019-8820 CVE-2019-8823 CVE-2019-8835
                   CVE-2019-8844 CVE-2019-8846 CVE-2019-9455
                   CVE-2019-9458 CVE-2019-11068 CVE-2019-12614
                   CVE-2019-13050 CVE-2019-13225 CVE-2019-13627
                   CVE-2019-14889 CVE-2019-15165 CVE-2019-15166
                   CVE-2019-15903 CVE-2019-15917 CVE-2019-15925
                   CVE-2019-16167 CVE-2019-16168 CVE-2019-16231
                   CVE-2019-16233 CVE-2019-16935 CVE-2019-17450
                   CVE-2019-17546 CVE-2019-18197 CVE-2019-18808
                   CVE-2019-18809 CVE-2019-19046 CVE-2019-19056
                   CVE-2019-19062 CVE-2019-19063 CVE-2019-19068
                   CVE-2019-19072 CVE-2019-19221 CVE-2019-19319
                   CVE-2019-19332 CVE-2019-19447 CVE-2019-19524
                   CVE-2019-19533 CVE-2019-19537 CVE-2019-19543
                   CVE-2019-19602 CVE-2019-19767 CVE-2019-19770
                   CVE-2019-19906 CVE-2019-19956 CVE-2019-20054
                   CVE-2019-20218 CVE-2019-20386 CVE-2019-20387
                   CVE-2019-20388 CVE-2019-20454 CVE-2019-20636
                   CVE-2019-20807 CVE-2019-20812 CVE-2019-20907
                   CVE-2019-20916 CVE-2020-0305 CVE-2020-0444
                   CVE-2020-1716 CVE-2020-1730 CVE-2020-1751
                   CVE-2020-1752 CVE-2020-1971 CVE-2020-2574
                   CVE-2020-2752 CVE-2020-2922 CVE-2020-3862
                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867
                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894
                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3898
                   CVE-2020-3899 CVE-2020-3900 CVE-2020-3901
                   CVE-2020-3902 CVE-2020-6405 CVE-2020-7595
                   CVE-2020-7774 CVE-2020-8177 CVE-2020-8492
                   CVE-2020-8563 CVE-2020-8566 CVE-2020-8619
                   CVE-2020-8622 CVE-2020-8623 CVE-2020-8624
                   CVE-2020-8647 CVE-2020-8648 CVE-2020-8649
                   CVE-2020-9327 CVE-2020-9802 CVE-2020-9803
                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807
                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862
                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895
                   CVE-2020-9915 CVE-2020-9925 CVE-2020-10018
                   CVE-2020-10029 CVE-2020-10732 CVE-2020-10749
                   CVE-2020-10751 CVE-2020-10763 CVE-2020-10773
                   CVE-2020-10774 CVE-2020-10942 CVE-2020-11565
                   CVE-2020-11668 CVE-2020-11793 CVE-2020-12465
                   CVE-2020-12655 CVE-2020-12659 CVE-2020-12770
                   CVE-2020-12826 CVE-2020-13249 CVE-2020-13630
                   CVE-2020-13631 CVE-2020-13632 CVE-2020-14019
                   CVE-2020-14040 CVE-2020-14381 CVE-2020-14382
                   CVE-2020-14391 CVE-2020-14422 CVE-2020-15157
                   CVE-2020-15503 CVE-2020-15862 CVE-2020-15999
                   CVE-2020-16166 CVE-2020-24490 CVE-2020-24659
                   CVE-2020-25211 CVE-2020-25641 CVE-2020-25658
                   CVE-2020-25661 CVE-2020-25662 CVE-2020-25681
                   CVE-2020-25682 CVE-2020-25683 CVE-2020-25684
                   CVE-2020-25685 CVE-2020-25686 CVE-2020-25687
                   CVE-2020-25694 CVE-2020-25696 CVE-2020-26160
                   CVE-2020-27813 CVE-2020-27846 CVE-2020-28362
                   CVE-2020-29652 CVE-2021-2007 CVE-2021-3121
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.7.0 is now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.7.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2020:5634

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64

The image digest is sha256:d74b1cfa81f8c9cc23336aee72d8ae9c9905e62c4874b071317a078c316f8a70

(For s390x architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-s390x

The image digest is sha256:a68ca03d87496ddfea0ac26b82af77231583a58a7836b95de85efe5e390ad45d

(For ppc64le architecture)

  $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.7.0-ppc64le

The image digest is sha256:bc7b04e038c8ff3a33b827f4ee19aa79b26e14c359a7dcc1ced9f3b58e5f1ac6

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor.

Security Fix(es):

* crewjam/saml: authentication bypass in saml authentication (CVE-2020-27846)

* golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference (CVE-2020-29652)

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

* kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider (CVE-2020-8563)

* containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters (CVE-2020-10749)

* heketi: gluster-block volume password details available in logs (CVE-2020-10763)

* golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash (CVE-2020-14040)

* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

* golang-github-gorilla-websocket: integer overflow leads to denial of service (CVE-2020-27813)

* golang: math/big: panic during recursive division of very large numbers (CVE-2020-28362)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

3. Solution:

For OpenShift Container Platform 4.7, see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html.

4. Bugs fixed (https://bugzilla.redhat.com/):

1620608 - Restoring deployment config with history leads to weird state
1752220 - [OVN] Network Policy fails to work when project label gets overwritten
1756096 - Local storage operator should implement must-gather spec
1756173 - /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
1768255 - installer reports 100% complete but failing components
1770017 - Init containers restart when the exited container is removed from node.
1775057 - [MSTR-485] Cluster is abnormal after etcd backup/restore when the backup is conducted during etcd encryption is migrating
1775444 - RFE: k8s cpu manager does not restrict /usr/bin/pod cpuset
1777038 - Cluster scaled beyond host subnet limits does not fire alert or cleanly report why it cannot scale
1777224 - InfraID in metadata.json and .openshift_install_state.json is not consistent when repeating `create` commands
1784298 - "Displaying with reduced resolution due to large dataset." would show under some conditions
1785399 - Under condition of heavy pod creation, creation fails with 'error reserving pod name ...: name is reserved'
1797766 - "Resource Requirements" specDescriptor fields - CPU and Memory injects empty string YAML editor
1801089 - [OVN] Installation failed and monitoring pod not created due to some network error.
1805025 - [OSP] Machine status doesn't become "Failed" when creating a machine with invalid image
1805639 - Machine status should be "Failed" when creating a machine with invalid machine configuration
1806000 - CRI-O failing with: error reserving ctr name
1806915 - openshift-service-ca: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1806917 - openshift-service-ca-operator: Some core components are in openshift.io/run-level 1 and are bypassing SCC, but should not be
1810438 - Installation logs are not gathered from OCP nodes
1812085 - kubernetes-networking-namespace-pods dashboard doesn't exist
1812412 - Monitoring Dashboard: on restricted cluster, query timed out in expression evaluation
1813012 - EtcdDiscoveryDomain no longer needed
1813949 - openshift-install doesn't use env variables for OS_* for some of API endpoints
1816812 - OpenShift test suites are not resilient to rate limited registries (like docker.io) and cannot control their dependencies for offline use
1819053 - loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
1819457 - Package Server is in 'Cannot update' status despite properly working
1820141 - [RFE] deploy qemu-quest-agent on the nodes
1822744 - OCS Installation CI test flaking
1824038 - Integration Tests: StaleElementReferenceError in OLM single-installmode scenario
1825892 - StorageClasses and PVs are not cleaned completely after running the csi verification tool
1826301 - Wrong NodeStatus reports in file-integrity scan when configuration error in aide.conf file
1829723 - User workload monitoring alerts fire out of the box
1832968 - oc adm catalog mirror does not mirror the index image itself
1833012 - Lower OVNKubernetes HTTP E/W performance compared with OpenShiftSDN
1833220 - CVE-2020-10749 containernetworking/plugins: IPv6 router advertisements allow for MitM attacks on IPv4 clusters
1834995 - olmFull suite always fails once the suite is run on the same cluster
1836017 - vSphere UPI: Both Internal and External load balancers for kube-apiserver should use /readyz
1837953 - Replacing masters doesn't work for ovn-kubernetes 4.4
1838352 - OperatorExited, Pending marketplace-operator-... pod for several weeks
1838751 - [oVirt][Tracker] Re-enable skipped network tests
1839239 - csi-snapshot-controller flickers Degraded=True on etcd hiccups
1840759 - [aws-ebs-csi-driver] The volume created by aws ebs csi driver can not be deleted when the cluster is destroyed
1841039 - authentication-operator: Add e2e test for password grants to Keycloak being set as OIDC IdP
1841119 - Get rid of config patches and pass flags directly to kcm
1841175 - When an Install Plan gets deleted, OLM does not create a new one
1841381 - Issue with memoryMB validation
1841885 - oc adm catalog mirror command attempts to pull from registry.redhat.io when using --from-dir option
1844727 - Etcd container leaves grep and lsof zombie processes
1845387 - CVE-2020-10763 heketi: gluster-block volume password details available in logs
1847074 - Filter bar layout issues at some screen widths on search page
1848358 - CRDs with preserveUnknownFields:true don't reflect in status that they are non-structural
1849543 - [4.5] kubeletconfig's description will show multiple lines for finalizers when upgrade from 4.4.8->4.5
1851103 - Use of NetworkManager-wait-online.service in rhcos-growpart.service
1851203 - [GSS] [RFE] Need a simpler representation of capacity breakdown in total usage and per project breakdown in OCS 4 dashboard
1851351 - OCP 4.4.9: EtcdMemberIPMigratorDegraded: rpc error: code = Canceled desc = grpc: the client connection is closing
1851693 - The `oc apply` should return errors instead of hanging there when failing to create the CRD
1852289 - Upgrade testsuite fails on ppc64le environment - Unsupported LoadBalancer service
1853115 - the restriction of --cloud option should be shown in help text.
1853116 - `--to` option does not work with `--credentials-requests` flag.
1853352 - [v2v][UI] Storage Class fields Should Not be empty in VM disks view
1853652 - CVE-2020-14040 golang.org/x/text: possibility to trigger an infinite loop in encoding/unicode could lead to crash
1854567 - "Installed Operators" list showing "duplicated" entries during installation
1855325 - [Feature:Prometheus][Conformance] Prometheus when installed on the cluster [Top Level] [Feature:Prometheus][Conformance] Prometheus when installed on the cluster should report telemetry if a cloud.openshift.com token is present
1855351 - Inconsistent Installer reactions to Ctrl-C during user input process
1855408 - OVN cluster unstable after running minimal scale test
1856351 - Build page should show metrics for when the build ran, not the last 30 minutes
1856354 - New APIServices missing from OpenAPI definitions
1857446 - ARO/Azure: excessive pod memory allocation causes node lockup
1857877 - Operator upgrades can delete existing CSV before completion
1858578 - [v2v] [ui] VM import RHV to CNV Target VM Name longer than 63 chars should not be allowed
1859174 - [IPI][OSP] Having errors from 4.3 to 4.6 about Security group rule already created
1860136 - default ingress does not propagate annotations to route object on update
1860322 - [OCPv4.5.2] after unexpected shutdown one of RHV Hypervisors, OCP worker nodes machine are marked as "Failed"
1860518 - unable to stop a crio pod
1861383 - Route with `haproxy.router.openshift.io/timeout: 365d` kills the ingress controller
1862430 - LSO: PV creation lock should not be acquired in a loop
1862489 - LSO autoprovisioning should exclude top level disks that are part of LVM volume group.
1862608 - Virtual media does not work on hosts using BIOS, only UEFI
1862918 - [v2v] User should only select SRIOV network when importing vm with SRIOV network
1865743 - Some pods are stuck in ContainerCreating and some sdn pods are in CrashLoopBackOff
1865839 - rpm-ostree fails with "System transaction in progress" when moving to kernel-rt
1866043 - Configurable table column headers can be illegible
1866087 - Examining agones helm chart resources results in "Oh no!"
1866261 - Need to indicate the intentional behavior for Ansible in the `create api` help info
1866298 - [RHOCS Usability Study][Installation] Labeling the namespace should be a part of the installation flow or be clearer as a requirement
1866320 - [RHOCS Usability Study][Dashboard] Users were confused by Available Capacity and the Total Capacity
1866334 - [RHOCS Usability Study][Installation] On the Operator installation page, there's no indication on which labels offer tooltip/help
1866340 - [RHOCS Usability Study][Dashboard] It was not clear why "No persistent storage alerts" was prominently displayed
1866343 - [RHOCS Usability Study][Dashboard] User wanted to know the time frame for Data Consumption, e.g I/O Operations
1866445 - kola --basic-qemu-scenarios scenario fail on ppc64le & s390x
1866482 - Few errors are seen when oc adm must-gather is run
1866605 - No metadata.generation set for build and buildconfig objects
1866873 - MCDDrainError "Drain failed on , updates may be blocked" missing rendered node name
1866901 - Deployment strategy for BMO allows multiple pods to run at the same time
1866925 - openshift-install destroy cluster should fail quickly when provided with invalid credentials on Azure.
1867165 - Cannot assign static address to baremetal install bootstrap vm
1867380 - When using webhooks in OCP 4.5 fails to rollout latest deploymentconfig
1867400 - [OCs 4.5] UI should not allow creation of second storagecluster of different mode in a single OCS
1867477 - HPA monitoring cpu utilization fails for deployments which have init containers
1867518 - [oc] oc should not print so many goroutines when ANY command fails
1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
1867965 - OpenShift Console Deployment Edit overwrites deployment yaml
1868004 - opm index add appears to produce image with wrong registry server binary
1868065 - oc -o jsonpath prints possible warning / bug "Unable to decode server response into a Table"
1868104 - Baremetal actuator should not delete Machine objects
1868125 - opm index add is not creating an index with valid images when --permissive flag is added, the index is empty instead
1868384 - CLI does not save login credentials as expected when using the same username in multiple clusters
1868527 - OpenShift Storage using VMWare vSAN receives error "Failed to add disk 'scsi0:2'" when mounted pod is created on separate node
1868645 - After a disaster recovery pods are stuck in "NodeAffinity" state and not running
1868748 - ClusterProvisioningIP in baremetal platform has wrong JSON annotation
1868765 - [vsphere][ci] could not reserve an IP address: no available addresses
1868770 - catalogSource named "redhat-operators" deleted in a disconnected cluster
1868976 - Prometheus error opening query log file on EBS backed PVC
1869293 - The configmap name looks confusing in aide-ds pod logs
1869606 - crio's failing to delete a network namespace
1870337 - [sig-storage] Managed cluster should have no crashlooping recycler pods over four minutes
1870342 - [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1870373 - Ingress Operator reports available when DNS fails to provision
1870467 - D/DC Part of Helm / Operator Backed should not have HPA
1870728 - openshift-install creates expired ignition files from stale .openshift_install_state.json
1870800 - [4.6] Managed Column not appearing on Pods Details page
1871170 - e2e tests are needed to validate the functionality of the etcdctl container
1872001 - EtcdDiscoveryDomain no longer needed
1872095 - content are expanded to the whole line when only one column in table on Resource Details page
1872124 - Could not choose device type as "disk" or "part" when create localvolumeset from web console
1872128 - Can't run container with hostPort on ipv6 cluster
1872166 - 'Silences' link redirects to unexpected 'Alerts' view after creating a silence in the Developer perspective
1872251 - [aws-ebs-csi-driver] Verify job in CI doesn't check for vendor dir sanity
1872786 - Rules in kube-apiserver.rules are taking too long and consuming too much memory for Prometheus to evaluate them
1872821 - [DOC] Typo in Ansible Operator Tutorial
1872907 - Fail to create CR from generated Helm Base Operator
1872923 - Click "Cancel" button on the "initialization-resource" creation form page should send users to the "Operator details" page instead of "Install Operator" page (previous page)
1873007 - [downstream] failed to read config when running the operator-sdk in the home path
1873030 - Subscriptions without any candidate operators should cause resolution to fail
1873043 - Bump to latest available 1.19.x k8s
1873114 - Nodes goes into NotReady state (VMware)
1873288 - Changing Cluster-Wide Pull Secret Does Not Trigger Updates In Kubelet Filesystem
1873305 - Failed to power on /inspect node when using Redfish protocol
1873326 - Accessibility - The symbols e.g checkmark in the overview page has no text description, label, or other accessible information
1873480 - Accessibility - No text description, alt text, label, or other accessible information associated with the help icon: "?" button/icon in Developer Console -> Navigation
1873556 - [Openstack] HTTP_PROXY setting for NetworkManager-resolv-prepender not working
1873593 - MCO fails to cope with ContainerRuntimeConfig that has a name > 63 characters
1874057 - Pod stuck in CreateContainerError - error msg="container_linux.go:348: starting container process caused \"chdir to cwd (\"/mount-point\") set in config.json failed: permission denied\""
1874074 - [CNV] Windows 2019 Default Template Not Defaulting to Proper NIC/Storage Driver
1874192 - [RFE] "Create Backing Store" page doesn't allow to select already defined k8s secret as target bucket credentials when Google Cloud Storage is selected as a provider
1874240 - [vsphere] unable to deprovision - Runtime error list attached objects
1874248 - Include validation for vcenter host in the install-config
1874340 - vmware: NodeClockNotSynchronising alert is triggered in openshift cluster after upgrading from 4.4.16 to 4.5.6
1874583 - apiserver tries and fails to log an event when shutting down
1874584 - add retry for etcd errors in kube-apiserver
1874638 - Missing logging for nbctl daemon
1874736 - [downstream] no version info for the helm-operator
1874901 - add utm_source parameter to Red Hat Marketplace URLs for attribution
1874968 - Accessibility: The project selection drop down is a keyboard trap
1875247 - Dependency resolution error "found more than one head for channel" is unhelpful for users
1875516 - disabled scheduling is easy to miss in node page of OCP console
1875598 - machine status is Running for a master node which has been terminated from the console
1875806 - When creating a service of type "LoadBalancer" (Kuryr,OVN) communication through this loadbalancer fails after 2-5 minutes.
1876166 - need to be able to disable kube-apiserver connectivity checks
1876469 - Invalid doc link on yaml template schema description
1876701 - podCount specDescriptor change doesn't take effect on operand details page
1876815 - Installer uses the environment variable OS_CLOUD for manifest generation despite explicit prompt
1876935 - AWS volume snapshot is not deleted after the cluster is destroyed
1877071 - vSphere IPI - Nameserver limits were exceeded, some nameservers have been omitted
1877105 - add redfish to enabled_bios_interfaces
1877116 - e2e aws calico tests fail with `rpc error: code = ResourceExhausted`
1877273 - [OVN] EgressIP cannot fail over to available nodes after one egressIP node shutdown
1877648 - [sriov] VF from allocatable and capacity of node is incorrect when the policy is only 'rootDevices'
1877681 - Manually created PV can not be used
1877693 - dnsrecords specify recordTTL as 30 but the value is null in AWS Route 53
1877740 - RHCOS unable to get ip address during first boot
1877812 - [ROKS] IBM cloud failed to terminate OSDs when upgraded between internal builds of OCS 4.5
1877919 - panic in multus-admission-controller
1877924 - Cannot set BIOS config using Redfish with Dell iDracs
1878022 - Met imagestreamimport error when import the whole image repository
1878086 - OCP 4.6+OCS 4.6 (multiple SC) Internal Mode - UI should populate the default "Filesystem Name" instead of providing a textbox, & the name should be validated
1878301 - [4.6] [UI] Unschedulable used to always be displayed when Node is Ready status
1878701 - After deleting and recreating a VM with same name, the VM events contain the events from the old VM
1878766 - CPU consumption on nodes is higher than the CPU count of the node.
1878772 - On the nodes there are up to 547 zombie processes caused by thanos and Prometheus.
1878823 - "oc adm release mirror" generating incomplete imageContentSources when using "--to" and "--to-release-image"
1878845 - 4.5 to 4.6.rc.4 upgrade failure: authentication operator health check connection refused for multitenant mode
1878900 - Installer complains about not enough vcpu for the baremetal flavor where generic bm flavor is being used
1878953 - RBAC error shows when normal user access pvc upload page
1878956 - `oc api-resources` does not include API version
1878972 - oc adm release mirror removes the architecture information
1879013 - [RFE] Improve CD-ROM interface selection
1879056 - UI should allow to change or unset the evictionStrategy
1879057 - [CSI Certificate Test] Test failed for CSI certification tests for CSIdriver openshift-storage.rbd.csi.ceph.com with RWX enabled
1879094 - RHCOS dhcp kernel parameters not working as expected
1879099 - Extra reboot during 4.5 -> 4.6 upgrade
1879244 - Error adding container to network "ipvlan-host-local": "master" field is required
1879248 - OLM Cert Dir for Webhooks does not align SDK/Kubebuilder
1879282 - Update OLM references to point to the OLM's new doc site
1879283 - panic after nil pointer dereference in pkg/daemon/update.go
1879365 - Overlapping, divergent openshift-cluster-storage-operator manifests
1879419 - [RFE] Improve boot source description for 'Container' and 'URL'
1879430 - openshift-object-counts quota is not dynamically updating as the resource is deleted.
1879565 - IPv6 installation fails on node-valid-hostname
1879777 - Overlapping, divergent openshift-machine-api namespace manifests
1879878 - Messages flooded in thanos-querier pod - oauth-proxy container: Authorization header does not start with 'Basic', skipping basic authentication in Log message in thanos-querier pod the oauth-proxy
1879930 - Annotations shouldn't be removed during object reconciliation
1879976 - No other channel visible from console
1880068 - image pruner is not aware of image policy annotation, StatefulSets, etc.
1880148 - dns daemonset rolls out slowly in large clusters
1880161 - Actuator Update calls should have fixed retry time
1880259 - additional network + OVN network installation failed
1880389 - Pipeline Runs with skipped Tasks incorrectly show Tasks as "Failed"
1880410 - Convert Pipeline Visualization node to SVG
1880417 - [vmware] Fail to boot with Secure Boot enabled, kernel lockdown denies iopl access to afterburn
1880443 - broken machine pool management on OpenStack
1880450 - Host failed to install because its installation stage joined took longer than expected 20m0s.
1880473 - IBM Cloudpak operators installation stuck "UpgradePending" with InstallPlan status updates failing due to size limitation
1880680 - [4.3] [Tigera plugin] - openshift-kube-proxy fails - Failed to execute iptables-restore: exit status 4 (iptables-restore v1.8.4 (nf_tables)
1880785 - CredentialsRequest missing description in `oc explain`
1880787 - No description for Provisioning CRD for `oc explain`
1880902 - need dnsPolicy set in crd ingresscontrollers
1880913 - [DeScheduler] - change loglevel from Info to Error when priority class given in the descheduler params is not present in the cluster
1881027 - Cluster installation fails with error: the container name "assisted-installer" is already in use
1881046 - [OSP] openstack-cinder-csi-driver-operator doesn't contain required manifests and assets
1881155 - operator install authentication: Authentication requires functional ingress which requires at least one schedulable and ready node
1881268 - Image uploading failed but wizard claims the source is available
1881322 - kube-scheduler not scheduling pods for certificates not renewed automatically after nodes restoration
1881347 - [v2v][ui] VM Import Wizard does not call Import provider cleanup
1881881 - unable to specify target port manually resulting in application not reachable
1881898 - misalignment of sub-title in quick start headers
1882022 - [vsphere][ipi] directory path is incomplete, terraform can't find the cluster
1882057 - Not able to select access modes for snapshot and clone
1882140 - No description for spec.kubeletConfig
1882176 - Master recovery instructions don't handle IP change well
1882191 - Installation fails against external resources which lack DNS Subject Alternative Name
1882209 - [BareMetal IPI] local coredns resolution not working
1882210 - [release 4.7] insights-operator: Fix bug in reflector not recovering from "Too large resource version"
1882268 - [e2e][automation] Add Integration Test for Snapshots
1882361 - Retrieve and expose the latest report for the cluster
1882485 - dns-node-resolver corrupts /etc/hosts if internal registry is not in use
1882556 - git:// protocol in origin tests is not currently proxied
1882569 - CNO: Replacing masters doesn't work for ovn-kubernetes 4.4
1882608 - Spot instance not getting created on AzureGovCloud
1882630 - Fstype is changed after deleting pv provisioned by localvolumeset instance
1882649 - IPI installer labels all images it uploads into glance as qcow2
1882653 - The Approval should display the Manual after the APPROVAL changed to Manual from the Automatic
1882658 - [RFE] Volume Snapshot is not listed under inventory in Project Details page
1882660 - Operators in a namespace should be installed together when approve one
1882667 - [ovn] br-ex Link not found when scale up RHEL worker
1882723 - [vsphere] Suggested minimum value for providerspec not working
1882730 - z systems not reporting correct core count in recording rule
1882750 - [sig-api-machinery][Feature:APIServer][Late] kubelet terminates kube-apiserver gracefully
1882781 - nameserver= option to dracut creates extra NM connection profile
1882785 - Multi-Arch CI Jobs destroy libvirt network but occasionally leave it defined
1882844 - [IPI on vsphere] Executing 'openshift-installer destroy cluster' leaves installer tag categories in vsphere
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883388 - Bare Metal Hosts Details page doesn't show Maintenance and Power On/Off status
1883422 - operator-sdk cleanup fail after installing operator with "run bundle" without installmode and og with ownnamespace
1883425 - Gather top installplans and their count
1883502 - Logging is broken due to mix of k8s.io/klog v1 and v2
1883523 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1883538 - must gather report "cannot file manila/aws ebs/ovirt csi related namespaces and objects" error
1883560 - operator-registry image needs clean up in /tmp
1883563 - Creating duplicate namespace from create namespace modal breaks the UI
1883614 - [OCP 4.6] [UI] UI should not describe power cycle as "graceful"
1883642 - [sig-imageregistry][Feature:ImageTriggers][Serial] ImageStream admission TestImageStreamAdmitSpecUpdate
1883660 - e2e-metal-ipi CI job consistently failing on 4.4
1883765 - [user workload monitoring] improve latency of Thanos sidecar when streaming read requests
1883766 - [e2e][automation] Adjust tests for UI changes
1883768 - [user workload monitoring] The Prometheus operator should discard invalid TLS configurations
1883773 - opm alpha bundle build fails on win10 home
1883790 - revert "force cert rotation every couple days for development" in 4.7
1883803 - node pull secret feature is not working as expected
1883836 - Jenkins imagestream ubi8 and nodejs12 update
1883847 - The UI does not show checkbox for enable encryption at rest for OCS
1883853 - go list -m all does not work
1883905 - race condition in opm index add --overwrite-latest
1883946 - Understand why trident CSI pods are getting deleted by OCP
1884035 - Pods are illegally transitioning back to pending
1884041 - e2e should provide error info when minimum number of pods aren't ready in kube-system namespace
1884131 - oauth-proxy repository should run tests
1884165 - Repos should be disabled in -firstboot.service before OS extensions are applied
1884221 - IO becomes unhealthy due to a file change
1884258 - Node network alerts should work on ratio rather than absolute values
1884270 - Git clone does not support SCP-style ssh locations
1884334 - CVO marks an upgrade as failed when an operator takes more than 20 minutes to rollout
1884435 - vsphere - loopback is randomly not being added to resolver
1884565 - oauth-proxy crashes on invalid usage
1884584 - Kuryr controller continuously restarting due to unable to clean up Network Policy
1884613 - Create Instance of Prometheus from operator returns blank page for non cluster-admin users
1884628 - ovs-configuration service fails when the external network is configured on a tagged vlan on top of a bond device on a baremetal IPI deployment
1884629 - Visually impaired user using screen reader not able to select Admin/Developer console options in drop down menu.
1884632 - Adding BYOK disk encryption through DES
1884654 - Utilization of a VMI is not populated
1884655 - KeyError on self._existing_vifs[port_id]
1884664 - Operator install page shows "installing..." instead of going to install status page
1884672 - Failed to inspect hardware. Reason: unable to start inspection: 'idrac'
1884691 - Installer blocks cloud-credential-operator manual mode on GCP and Azure
1884724 - Quick Start: Serverless quickstart doesn't match Operator install steps
1884739 - Node process segfaulted
1884824 - Update baremetal-operator libraries to k8s 1.19
1885002 - network kube-rbac-proxy scripts crashloop rather than non-crash looping
1885138 - Wrong detection of pending state in VM details
1885151 - [Cloud Team - Cluster API Provider Azure] Logging is broken due to mix of k8s.io/klog v1 and v2
1885165 - NoRunningOvnMaster alert falsely triggered
1885170 - Nil pointer when verifying images
1885173 - [e2e][automation] Add test for next run configuration feature
1885179 - oc image append fails on push (uploading a new layer)
1885213 - Vertical Pod Autoscaler (VPA) not working with DeploymentConfig
1885218 - [e2e][automation] Add virtctl to gating script
1885223 - Sync with upstream (fix panicking cluster-capacity binary)
1885235 - Prometheus: Logging is broken due to mix of k8s.io/klog v1 and v2
1885241 - kube-rbac-proxy: Logging is broken due to mix of k8s.io/klog v1 and v2
1885243 - prometheus-adapter: Logging is broken due to mix of k8s.io/klog v1 and v2
1885244 - prometheus-operator: Logging is
broken due to mix of k8s.io/klog v1 and v2\n1885246 - cluster-monitoring-operator: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885249 - openshift-state-metrics: Logging is broken due to mix of k8s.io/klog v1 and v2\n1885308 - Supermicro nodes failed to boot via disk during installation when using IPMI and UEFI\n1885315 - unit tests fail on slow disks\n1885319 - Remove redundant use of group and kind of DataVolumeTemplate\n1885343 - Console doesn\u0027t load in iOS Safari when using self-signed certificates\n1885344 - 4.7 upgrade - dummy bug for 1880591\n1885358 - add p\u0026f configuration to protect openshift traffic\n1885365 - MCO does not respect the install section of systemd files when enabling\n1885376 - failed to initialize the cluster: Cluster operator marketplace is still updating\n1885398 - CSV with only Webhook conversion can\u0027t be installed\n1885403 - Some OLM events hide the underlying errors\n1885414 - Need to disable HTX when not using HTTP/2 in order to preserve HTTP header name case\n1885425 - opm index add cannot batch add multiple bundles that use skips\n1885543 - node tuning operator builds and installs an unsigned RPM\n1885644 - Panic output due to timeouts in openshift-apiserver\n1885676 - [OCP 4.7]UI should fallback to minimal deployment only after total CPU \u003c 30 || totalMemory \u003c 72 GiB for initial deployment\n1885702 - Cypress:  Fix \u0027aria-hidden-focus\u0027 accesibility violations\n1885706 - Cypress:  Fix \u0027link-name\u0027 accesibility violation\n1885761 - DNS fails to resolve in some pods\n1885856 - Missing registry v1 protocol usage metric on telemetry\n1885864 - Stalld service crashed under the worker node\n1885930 - [release 4.7] Collect ServiceAccount statistics\n1885940 - kuryr/demo image ping not working\n1886007 - upgrade test with service type load balancer will never work\n1886022 - Move range allocations to CRD\u0027s\n1886028 - [BM][IPI] Failed to delete node after scale down\n1886111 - 
UpdatingopenshiftStateMetricsFailed: DeploymentRollout of openshift-monitoring/openshift-state-metrics: got 1 unavailable replicas\n1886134 - Need to set GODEBUG=x509ignoreCN=0 in initrd\n1886154 - System roles are not present while trying to create new role binding through web console\n1886166 - 1885517 Clone - Not needed for 4.7 - upgrade from 4.5-\u003e4.6 causes broadcast storm\n1886168 - Remove Terminal Option for Windows Nodes\n1886200 - greenwave / CVP is failing on bundle validations, cannot stage push\n1886229 - Multipath support for RHCOS sysroot\n1886294 - Unable to schedule a pod due to Insufficient ephemeral-storage\n1886327 - Attempt to add a worker using bad roodDeviceHint: bmh and machine become Provisioned, no error in status\n1886353 - [e2e][automation] kubevirt-gating job fails for a missing virtctl URL\n1886397 - Move object-enum to console-shared\n1886423 - New Affinities don\u0027t contain ID until saving\n1886435 - Azure UPI uses deprecated command \u0027group deployment\u0027\n1886449 - p\u0026f: add configuration to protect oauth server traffic\n1886452 - layout options doesn\u0027t gets selected style on click i.e grey background\n1886462 - IO doesn\u0027t recognize namespaces - 2 resources with the same name in 2 namespaces -\u003e only 1 gets collected\n1886488 - move e2e test off of nfs image from docker.io/gmontero/nfs-server:latest\n1886524 - Change default terminal command for Windows Pods\n1886553 - i/o timeout experienced from build02 when targeting CI test cluster during test execution\n1886600 - panic: assignment to entry in nil map\n1886620 - Application behind service load balancer with PDB is not disrupted\n1886627 - Kube-apiserver pods restarting/reinitializing periodically\n1886635 - CVE-2020-8563 kubernetes: Secret leaks in kube-controller-manager when using vSphere Provider\n1886636 - Panic in machine-config-operator\n1886749 - Removing network policy from namespace causes inability to access pods through loadbalancer. 
\n1886751 - Gather MachineConfigPools\n1886766 - PVC dropdown has \u0027Persistent Volume\u0027 Label\n1886834 - ovn-cert is mandatory in both master and node daemonsets\n1886848 - [OSP] machine instance-state annotation discrepancy with providerStatus.instanceState\n1886861 - ordered-values.yaml not honored if values.schema.json provided\n1886871 - Neutron ports created for hostNetworking pods\n1886890 - Overwrite jenkins-agent-base imagestream\n1886900 - Cluster-version operator fills logs with \"Manifest: ...\" spew\n1886922 - [sig-network] pods should successfully create sandboxes by getting pod\n1886973 - Local storage operator doesn\u0027t include correctly populate LocalVolumeDiscoveryResult in console\n1886977 - [v2v]Incorrect VM Provider type displayed in UI while importing VMs through VMIO\n1887010 - Imagepruner met error \"Job has reached the specified backoff limit\" which causes image registry degraded\n1887026 - FC volume attach fails with \u201cno fc disk found\u201d error on OCP 4.6 PowerVM cluster\n1887040 - [upgrade] ovs pod crash for rhel worker when upgarde from 4.5 to 4.6\n1887046 - Event for LSO need update to avoid confusion\n1887088 - cluster-node-tuning-operator refers to missing cluster-node-tuned image\n1887375 - User should be able to specify volumeMode when creating pvc from web-console\n1887380 - Unsupported access mode should not be available to select when creating pvc by aws-ebs-csi-driver(gp2-csi) from web-console\n1887392 - openshift-apiserver: delegated authn/z should have ttl \u003e metrics/healthz/readyz/openapi interval\n1887428 - oauth-apiserver service should be monitored by prometheus\n1887441 - ingress misconfiguration may break authentication but ingress operator keeps reporting \"degraded: False\"\n1887454 - [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data\n1887456 - It is impossible to attach the default NIC to a bridge with the latest version of OVN 
Kubernetes\n1887465 - Deleted project is still referenced\n1887472 - unable to edit application group for KSVC via gestures (shift+Drag)\n1887488 - OCP 4.6:  Topology Manager OpenShift E2E test fails:  gu workload attached to SRIOV networks should let resource-aligned PODs have working SRIOV network interface\n1887509 - Openshift-tests conformance TopologyManager tests run when Machine Config Operator is not installed on cluster\n1887525 - Failures to set master HardwareDetails cannot easily be debugged\n1887545 - 4.5 to 4.6 upgrade fails when external network is configured on a bond device: ovs-configuration service fails and node becomes unreachable\n1887585 - ovn-masters stuck in crashloop after scale test\n1887651 - [Internal Mode] Object gateway (RGW) in unknown state after OCP upgrade. \n1887737 - Test TestImageRegistryRemovedWithImages is failing on e2e-vsphere-operator\n1887740 - cannot install descheduler operator after uninstalling it\n1887745 - API server is throwing 5xx error code for 42.11% of requests for LIST events\n1887750 - `oc explain localvolumediscovery` returns empty description\n1887751 - `oc explain localvolumediscoveryresult` returns empty description\n1887778 - Add ContainerRuntimeConfig gatherer\n1887783 - PVC upload cannot continue after approve the certificate\n1887797 - [CNV][V2V] Default network type is bridge for interface bound to POD network in VMWare migration wizard\n1887799 - User workload monitoring prometheus-config-reloader OOM\n1887850 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install test is flaky\n1887863 - Installer panics on invalid flavor\n1887864 - Clean up dependencies to avoid invalid scan flagging\n1887934 - TestForwardedHeaderPolicyAppend, TestForwardedHeaderPolicyReplace, and TestForwardedHeaderPolicyIfNone consistently fail because of case-sensitive comparison\n1887936 - Kube-scheduler should be able to parse v1beta1 KubeSchedulerConfig\n1888015 - workaround kubelet graceful 
termination of static pods bug\n1888028 - prevent extra cycle in aggregated apiservers\n1888036 - Operator details shows old CRD versions\n1888041 - non-terminating pods are going from running to pending\n1888072 - Setting Supermicro node to PXE boot via Redfish doesn\u0027t take affect\n1888073 - Operator controller continuously busy looping\n1888118 - Memory requests not specified for image registry operator\n1888150 - Install Operand Form on OperatorHub is displaying unformatted text\n1888172 - PR 209 didn\u0027t update the sample archive, but machineset and pdbs are now namespaced\n1888227 - Failed to deploy some of container image on the recent OCP 4.6 nightly build\n1888292 - Fix CVE-2015-7501 affecting agent-maven-3.5\n1888311 - p\u0026f: make SAR traffic from oauth and openshift apiserver exempt\n1888363 - namespaces crash in dev\n1888378 - [IPI on Azure] errors destroying cluster when Azure resource group was never created\n1888381 - instance:node_network_receive_bytes_excluding_lo:rate1m value twice expected\n1888464 - installer missing permission definitions for TagResources and UntagResources when installing in existing VPC\n1888494 - imagepruner pod is error when image registry storage is not configured\n1888565 - [OSP] machine-config-daemon-firstboot.service failed with \"error reading osImageURL from rpm-ostree\"\n1888595 - cluster-policy-controller logs shows error which reads initial monitor sync has error\n1888601 - The poddisruptionbudgets is using the operator service account, instead of gather\n1888657 - oc doesn\u0027t know its name\n1888663 - sdn starts after kube-apiserver, delay readyz until oauth-apiserver is reachable\n1888671 - Document the Cloud Provider\u0027s ignore-volume-az setting\n1888738 - quay.io/openshift/origin-must-gather:latest is not a multi-arch, manifest-list image\n1888763 - at least one of these parameters (Vendor, DeviceID or PfNames) has to be defined in nicSelector in CR %s\", cr.GetName()\n1888827 - ovnkube-master 
may segfault when trying to add IPs to a nil address set\n1888861 - need to pass dual-stack service CIDRs to kube-apiserver in dual-stack cluster\n1888866 - AggregatedAPIDown permanently firing after removing APIService\n1888870 - JS error when using autocomplete in YAML editor\n1888874 - hover message are not shown for some properties\n1888900 - align plugins versions\n1888985 - Cypress:  Fix \u0027Ensures buttons have discernible text\u0027 accesibility violation\n1889213 - The error message of uploading failure is not clear enough\n1889267 - Increase the time out for creating template and upload image in the terraform\n1889348 - Project link should be removed from Application Details page, since it is inaccurate (Application Stages)\n1889374 - Kiali feature won\u0027t work on fresh 4.6 cluster\n1889388 - ListBundles returns incorrect replaces/skips when bundles have been added via semver-skippatch mode\n1889420 - OCP failed to add vsphere disk when pod moved to new node during cluster upgrade\n1889515 - Accessibility - The symbols e.g checkmark in the Node \u003e overview page has no text description, label, or other accessible information\n1889529 - [Init-CR annotation] Inline alert shows operand instance was needed still appearing after creating an Operand instance\n1889540 - [4.5 upgrade][alert]CloudCredentialOperatorDown\n1889577 - Resources are not shown on project workloads page\n1889620 - [Azure] - Machineset not scaling when publicIP:true in disconnected Azure enviroment\n1889630 - Scheduling disabled popovers are missing for Node status in Node Overview and Details pages\n1889692 - Selected Capacity is showing wrong size\n1889694 - usbguard fails to install as RHCOS extension due to missing libprotobuf.so.15\n1889698 - When the user clicked cancel at the Create Storage Class confirmation dialog all the data from the Local volume set goes off\n1889710 - Prometheus metrics on disk take more space compared to OCP 4.5\n1889721 - opm index add 
semver-skippatch mode does not respect prerelease versions\n1889724 - When LocalVolumeDiscovery CR is created form the LSO page User doesn\u0027t see the Disk tab\n1889767 - [vsphere] Remove certificate from upi-installer image\n1889779 - error when destroying a vSphere installation that failed early\n1889787 - OCP is flooding the oVirt engine with auth errors\n1889838 - race in Operator update after fix from bz1888073\n1889852 - support new AWS regions ap-east-1, af-south-1, eu-south-1\n1889863 - Router prints incorrect log message for namespace label selector\n1889891 - Backport timecache LRU fix\n1889912 - Drains can cause high CPU usage\n1889921 - Reported Degraded=False Available=False pair does not make sense\n1889928 - [e2e][automation] Add more tests for golden os\n1889943 - EgressNetworkPolicy does not work when setting Allow rule to a dnsName\n1890038 - Infrastructure status.platform not migrated to status.platformStatus causes warnings\n1890074 - MCO extension kernel-headers is invalid\n1890104 - with Serverless 1.10 version of trigger/subscription/channel/IMC is V1 as latest\n1890130 - multitenant mode consistently fails CI\n1890141 - move off docker.io images for build/image-eco/templates/jenkins e2e\n1890145 - The mismatched of font size for Status Ready and Health Check secondary text\n1890180 - FieldDependency x-descriptor doesn\u0027t support non-sibling fields\n1890182 - DaemonSet with existing owner garbage collected\n1890228 - AWS: destroy stuck on route53 hosted zone not found\n1890235 - e2e: update Protractor\u0027s checkErrors logging\n1890250 - workers may fail to join the cluster during an update from 4.5\n1890256 - Replacing a master node on a baremetal IPI deployment gets stuck when deleting the machine of the unhealthy member\n1890270 - External IP doesn\u0027t work if the IP address is not assigned to a node\n1890361 - s390x: Generate new ostree rpm with fix for rootfs immutability\n1890456 - [vsphere] mapi_instance_create_failed 
doesn\u0027t work on vsphere\n1890467 - unable to edit an application without a service\n1890472 - [Kuryr] Bulk port creation exception not completely formatted\n1890494 - Error assigning Egress IP on GCP\n1890530 - cluster-policy-controller doesn\u0027t gracefully terminate\n1890630 - [Kuryr] Available port count not correctly calculated for alerts\n1890671 - [SA] verify-image-signature using service account does not work\n1890677 - \u0027oc image info\u0027 claims \u0027does not exist\u0027 for application/vnd.oci.image.manifest.v1+json manifest\n1890808 - New etcd alerts need to be added to the monitoring stack\n1890951 - Mirror of multiarch images together with cluster logging case problems. It doesn\u0027t sync the \"overall\" sha it syncs only the sub arch sha. \n1890984 - Rename operator-webhook-config to sriov-operator-webhook-config\n1890995 - wew-app should provide more insight into why image deployment failed\n1891023 - ovn-kubernetes rbac proxy never starts waiting for an incorrect API call\n1891047 - Helm chart fails to install using developer console because of TLS certificate error\n1891068 - [sig-instrumentation] Prometheus when installed on the cluster shouldn\u0027t report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured [Early] failing due to TargetDown alert from kube-scheduler\n1891080 - [LSO] When Localvolumeset and SC is already created before OCS install Creation of LVD and LVS is skipped when user click created storage cluster from UI\n1891108 - p\u0026f: Increase the concurrency share of workload-low priority level\n1891143 - CVO deadlocked while shutting down, shortly after fresh cluster install (metrics goroutine)\n1891189 - [LSO] max device limit is accepting negative values. 
PVC is not getting created and no error is shown\n1891314 - Display incompatible helm charts for installation (kubeVersion of cluster doesn\u0027t meet requirements of chart)\n1891362 - Wrong metrics count for openshift_build_result_total\n1891368 - fync should be fsync for etcdHighFsyncDurations alert\u0027s annotations.message\n1891374 - fync should be fsync for etcdHighFsyncDurations critical alert\u0027s annotations.message\n1891376 - Extra text in Cluster Utilization charts\n1891419 - Wrong detail head on network policy detail page. \n1891459 - Snapshot tests should report stderr of failed commands\n1891498 - Other machine config pools do not show during update\n1891543 - OpenShift 4.6/OSP install fails when node flavor has less than 25GB, even with dedicated storage\n1891551 - Clusterautoscaler doesn\u0027t scale up as expected\n1891552 - Handle missing labels as empty. \n1891555 - The windows oc.exe binary does not have version metadata\n1891559 - kuryr-cni cannot start new thread\n1891614 - [mlx] testpmd fails inside OpenShift pod using DevX version 19.11\n1891625 - [Release 4.7] Mutable LoadBalancer Scope\n1891702 - installer get pending when additionalTrustBundle is added into  install-config.yaml\n1891716 - OVN cluster upgrade from 4.6.1 to 4.7 fails\n1891740 - OperatorStatusChanged is noisy\n1891758 - the authentication operator may spam DeploymentUpdated event endlessly\n1891759 - Dockerfile builds cannot change /etc/pki/ca-trust\n1891816 - [UPI] [OSP] control-plane.yml provisioning playbook fails on OSP 16.1\n1891825 - Error message not very informative in case of mode mismatch\n1891898 - The ClusterServiceVersion can define Webhooks that cannot be created. 
\n1891951 - UI should show warning while creating pools with compression on\n1891952 - [Release 4.7] Apps Domain Enhancement\n1891993 - 4.5 to 4.6 upgrade doesn\u0027t remove deployments created by marketplace\n1891995 - OperatorHub displaying old content\n1891999 - Storage efficiency card showing wrong compression ratio\n1892004 - OCP 4.6 opm on Ubuntu 18.04.4 - error /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28\u0027 not found (required by ./opm)\n1892167 - [SR-IOV] SriovNetworkNodePolicies apply ignoring the spec.nodeSelector. \n1892198 - TypeError in \u0027Performance Profile\u0027 tab displayed for \u0027Performance Addon Operator\u0027\n1892288 - assisted install workflow creates excessive control-plane disruption\n1892338 - HAProxyReloadFail alert only briefly fires in the event of a broken HAProxy config\n1892358 - [e2e][automation] update feature gate for kubevirt-gating job\n1892376 - Deleted netnamespace could not be re-created\n1892390 - TestOverwrite/OverwriteBundle/DefaultBehavior in operator-registry is flaky\n1892393 - TestListPackages is flaky\n1892448 - MCDPivotError alert/metric missing\n1892457 - NTO-shipped stalld needs to use FIFO for boosting. \n1892467 - linuxptp-daemon crash\n1892521 - [AWS] Startup bootstrap machine failed due to ignition file is missing in disconnected UPI env\n1892653 - User is unable to create KafkaSource with v1beta\n1892724 - VFS added to the list of devices of the nodeptpdevice CRD\n1892799 - Mounting additionalTrustBundle in the operator\n1893117 - Maintenance mode on vSphere blocks installation. \n1893351 - TLS secrets are not able to edit on console. 
\n1893362 - The ovs-xxxxx_openshift-sdn container does not terminate gracefully, slowing down reboots\n1893386 - false-positive ReadyIngressNodes_NoReadyIngressNodes: Auth operator makes risky \"worker\" assumption when guessing about ingress availability\n1893546 - Deploy using virtual media fails on node cleaning step\n1893601 - overview filesystem utilization of OCP is showing the wrong values\n1893645 - oc describe route SIGSEGV\n1893648 - Ironic image building process is not compatible with UEFI secure boot\n1893724 - OperatorHub generates incorrect RBAC\n1893739 - Force deletion doesn\u0027t work for snapshots if snapshotclass is already deleted\n1893776 - No useful metrics for image pull time available, making debugging issues there impossible\n1893798 - Lots of error messages starting with \"get namespace to enqueue Alertmanager instances failed\" in the logs of prometheus-operator\n1893832 - ErrorCount field is missing in baremetalhosts.metal3.io CRD\n1893889 - disabled dropdown items in the pf dropdown component are skipped over and unannounced by JAWS\n1893926 - Some \"Dynamic PV (block volmode)\" pattern storage e2e tests are wrongly skipped\n1893944 - Wrong product name for Multicloud Object Gateway\n1893953 - (release-4.7) Gather default StatefulSet configs\n1893956 - Installation always fails at \"failed to initialize the cluster: Cluster operator image-registry is still updating\"\n1893963 - [Testday] Workloads-\u003e Virtualization is not loading for Firefox browser\n1893972 - Should skip e2e test cases as early as possible\n1894013 - [v2v][Testday] VMware to CNV VM import]VMware URL: It is not clear that only the FQDN/IP address is required without \u0027https://\u0027\n1894020 - User with edit users cannot deploy images from their own namespace from the developer perspective\n1894025 - OCP 4.5 to 4.6 upgrade for \"aws-ebs-csi-driver-operator\" fails when \"defaultNodeSelector\" is set\n1894041 - [v2v][[Testday]VM import from VMware/RHV] VM import 
wizard: The target storage class name is not displayed if default storage class is used. \n1894065 - tag new packages to enable TLS support\n1894110 - Console shows wrong value for maxUnavailable and maxSurge when set to 0\n1894144 - CI runs of baremetal IPI are failing due to newer libvirt libraries\n1894146 - ironic-api used by metal3 is over provisioned and consumes a lot of RAM\n1894194 - KuryrPorts leftovers from 4.6 GA need to be deleted\n1894210 - Failed to encrypt OSDs on OCS4.6 installation (via UI)\n1894216 - Improve OpenShift Web Console availability\n1894275 - Fix CRO owners file to reflect node owner\n1894278 - \"database is locked\" error when adding bundle to index image\n1894330 - upgrade channels needs to be updated for 4.7\n1894342 - oauth-apiserver logs many \"[SHOULD NOT HAPPEN] failed to update managedFields for ... OAuthClient ... no corresponding type for oauth.openshift.io/v1, Kind=OAuthClient\"\n1894374 - Dont prevent the user from uploading a file with incorrect extension\n1894432 - [oVirt] sometimes installer timeout on tmp_import_vm\n1894477 - bash syntax error in nodeip-configuration.service\n1894503 - add automated test for Polarion CNV-5045\n1894519 - [OSP] External mode cluster creation disabled for Openstack and oVirt platform\n1894539 - [on-prem] Unable to deploy additional machinesets on separate subnets\n1894645 - Cinder volume provisioning crashes on nil cloud provider\n1894677 - image-pruner job is panicking: klog stack\n1894810 - Remove TechPreview Badge from Eventing in Serverless version 1.11.0\n1894860 - \u0027backend\u0027 CI job passing despite failing tests\n1894910 - Update the node to use the real-time kernel fails\n1894992 - All nightly jobs for e2e-metal-ipi failing due to ipa image missing tenacity package\n1895065 - Schema / Samples / Snippets Tabs are all selected at the same time\n1895099 - vsphere-upi and vsphere-upi-serial jobs time out waiting for bootstrap to complete in CI\n1895141 - panic in service-ca 
injector\n1895147 - Remove memory limits on openshift-dns\n1895169 - VM Template does not properly manage Mount Windows guest tools check box during VM creation\n1895268 - The bundleAPIs should NOT be empty\n1895309 - [OCP v47] The RHEL node scaleup fails due to \"No package matching \u0027cri-o-1.19.*\u0027 found available\" on OCP 4.7 cluster\n1895329 - The infra index filled with warnings \"WARNING: kubernetes.io/cinder built-in volume provider is now deprecated. The Cinder volume provider is deprecated and will be removed in a future release\"\n1895360 - Machine Config Daemon removes a file although its defined in the dropin\n1895367 - Missing image in metadata DB index.db in disconnected Operator Hub installation. OCP 4.6.1\n1895372 - Web console going blank after selecting any operator to install from OperatorHub\n1895385 - Revert KUBELET_LOG_LEVEL back to level 3\n1895423 - unable to edit an application with a custom builder image\n1895430 - unable to edit custom template application\n1895509 - Backup taken on one master cannot be restored on other masters\n1895537 - [sig-imageregistry][Feature:ImageExtract] Image extract should extract content from an image\n1895838 - oc explain description contains \u0027/\u0027\n1895908 - \"virtio\" option is not available when modifying a CD-ROM to disk type\n1895909 - e2e-metal-ipi-ovn-dualstack is failing\n1895919 - NTO fails to load kernel modules\n1895959 - configuring webhook token authentication should prevent cluster upgrades\n1895979 - Unable to get coreos-installer with --copy-network to work\n1896101 - [cnv][automation] Added negative tests for migration from VMWare and RHV\n1896160 - CI: Some cluster operators are not ready: marketplace (missing: Degraded)\n1896188 - [sig-cli] oc debug deployment configs from a build: local-busybox-1-build not completed\n1896218 - Occasional GCP install failures: Error setting IAM policy for project ...: googleapi: Error 400: Service account ... 
does not exist., badRequest
1896229 - Current Rate of Bytes Received and Current Rate of Bytes Transmitted data can not be loaded
1896244 - Found a panic in storage e2e test
1896296 - Git links should avoid .git as part of the URL and should not link git:// urls in general
1896302 - [e2e][automation] Fix 4.6 test failures
1896365 - [Migration]The SDN migration cannot revert under some conditions
1896384 - [ovirt IPI]: local coredns resolution not working
1896446 - Git clone from private repository fails after upgrade OCP 4.5 to 4.6
1896529 - Incorrect instructions in the Serverless operator and application quick starts
1896645 - documentationBaseURL needs to be updated for 4.7
1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
1896704 - Machine API components should honour cluster wide proxy settings
1896732 - "Attach to Virtual Machine OS" button should not be visible on old clusters
1896866 - File /etc/NetworkManager/system-connections/default_connection.nmconnection is incompatible with SR-IOV operator
1896898 - ovs-configuration.service fails when multiple IPv6 default routes are provided via RAs over the same interface and deployment bootstrap fails
1896918 - start creating new-style Secrets for AWS
1896923 - DNS pod /metrics exposed on anonymous http port
1896977 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1897003 - VNC console cannot be connected after visit it in new window
1897008 - Cypress: reenable check for 'aria-hidden-focus' rule & checkA11y test for modals
1897026 - [Migration] With updating optional network operator configuration, migration stucks on MCO
1897039 - router pod keeps printing log: template "msg"="router reloaded" "output"="[WARNING] 316/065823 (15) : parsing [/var/lib/haproxy/conf/haproxy.config:52]: option 'http-use-htx' is deprecated and ignored
1897050 - [IBM Power] LocalVolumeSet provisions boot partition as PV.
1897073 - [OCP 4.5] wrong netid assigned to Openshift projects/namespaces
1897138 - oVirt provider uses depricated cluster-api project
1897142 - When scaling replicas to zero, Octavia loadbalancer pool members are not updated accordingly
1897252 - Firing alerts are not showing up in console UI after cluster is up for some time
1897354 - Operator installation showing success, but Provided APIs are missing
1897361 - The MCO GCP-OP tests fail consistently on containerruntime tests with "connection refused"
1897412 - [sriov]disableDrain did not be updated in CRD of manifest
1897423 - Max unavailable and Max surge value are not shown on Deployment Config Details page
1897516 - Baremetal IPI deployment with IPv6 control plane fails when the nodes obtain both SLAAC and DHCPv6 addresses as they set their hostname to 'localhost'
1897520 - After restarting nodes the image-registry co is in degraded true state.
1897584 - Add casc plugins
1897603 - Cinder volume attachment detection failure in Kubelet
1897604 - Machine API deployment fails: Kube-Controller-Manager can't reach API: "Unauthorized"
1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers
1897641 - Baremetal IPI with IPv6 control plane: nodes respond with duplicate packets to ICMP6 echo requests
1897676 - [CI] [Azure] [UPI] CI failing since 4.6 changes in ignition
1897830 - [GSS] Unable to deploy OCS 4.5.2 on OCP 4.6.1, cannot `Create OCS Cluster Service`
1897891 - [RFE][v2v][UI][CNV VM import] Providing error message or/and block migration when vddk-init-image is missing
1897897 - ptp lose sync openshift 4.6
1898036 - no network after reboot (IPI)
1898045 - AWS EBS CSI Driver can not get updated cloud credential secret automatically
1898097 - mDNS floods the baremetal network
1898118 - Lack of logs on some image stream tests make hard to find root cause of a problem
1898134 - Descheduler logs show absolute values instead of percentage when LowNodeUtilization strategy is applied
1898159 - kcm operator shall pass --allocate-node-cidrs=false to kcm for ovn-kube and openshift-sdn cluster
1898174 - [OVN] EgressIP does not guard against node IP assignment
1898194 - GCP: can't install on custom machine types
1898238 - Installer validations allow same floating IP for API and Ingress
1898268 - [OVN]: `make check` broken on 4.6
1898289 - E2E test: Use KUBEADM_PASSWORD_FILE by default
1898320 - Incorrect Apostrophe Translation of "it's" in Scheduling Disabled Popover
1898357 - Within the operatorhub details view, long unbroken text strings do not wrap cause breaking display.
1898407 - [Deployment timing regression] Deployment takes longer with 4.7
1898417 - GCP: the dns targets in Google Cloud DNS is not updated after recreating loadbalancer service
1898487 - [oVirt] Node is not removed when VM has been removed from oVirt engine
1898500 - Failure to upgrade operator when a Service is included in a Bundle
1898517 - Ironic auto-discovery may result in rogue nodes registered in ironic
1898532 - Display names defined in specDescriptors not respected
1898580 - When adding more than one node selector to the sriovnetworknodepolicy, the cni and the device plugin pods are constantly rebooted
1898613 - Whereabouts should exclude IPv6 ranges
1898655 - [oVirt] Node deleted in oVirt should cause the Machine to go into a Failed phase
1898679 - Operand creation form - Required "type: object" properties (Accordion component) are missing red asterisk
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1898745 - installation failing with CVO reporting openshift-samples not rolled out, samples not setting versions in its ClusterOperator
1898839 - Wrong YAML in operator metadata
1898851 - Multiple Pods access the same volume on the same node e2e test cases are missed from aws ebs csi driver e2e test job
1898873 - Remove TechPreview Badge from Monitoring
1898954 - Backup script does not take /etc/kubernetes/static-pod-resources on a reliable way
1899111 - [RFE] Update jenkins-maven-agen to maven36
1899128 - VMI details screen -> show the warning that it is preferable to have a VM only if the VM actually does not exist
1899175 - bump the RHCOS boot images for 4.7
1899198 - Use new packages for ipa ramdisks
1899200 - In Installed Operators page I cannot search for an Operator by it's name
1899220 - Support AWS IMDSv2
1899350 - configure-ovs.sh doesn't configure bonding options
1899433 - When Creating OCS from ocs wizard Step Discover Disks shows Error "An error occurred Not Found"
1899459 - Failed to start monitoring pods once the operator removed from override list of CVO
1899515 - Passthrough credentials are not immediately re-distributed on update
1899575 - update discovery burst to reflect lots of CRDs on openshift clusters
1899582 - update discovery burst to reflect lots of CRDs on openshift clusters
1899588 - Operator objects are re-created after all other associated resources have been deleted
1899600 - Increased etcd fsync latency as of OCP 4.6
1899603 - workers-rhel7 CI jobs failing: Failed to remove rollback: error running rpm-ostree cleanup
1899627 - Project dashboard Active status using small icon
1899725 - Pods table does not wrap well with quick start sidebar open
1899746 - [ovn] error while waiting on flows for pod: OVS sandbox port is no longer active (probably due to a subsequent CNI ADD)
1899760 - etcd_request_duration_seconds_bucket metric has excessive cardinality
1899835 - catalog-operator repeatedly crashes with "runtime error: index out of range [0] with length 0"
1899839 - thanosRuler.resources.requests does not take effect in user-workload-monitoring-config confimap
1899853 - additionalSecurityGroupIDs not working for master nodes
1899922 - NP changes sometimes influence new pods.
1899949 - [Platform] Remove restriction on disk type selection for LocalVolumeSet
1900008 - Fix internationalized sentence fragments in ImageSearch.tsx
1900010 - Fix internationalized sentence fragments in BuildImageSelector.tsx
1900020 - Remove &apos; from internationalized keys
1900022 - Search Page - Top labels field is not applied to selected Pipeline resources
1900030 - disruption_tests: [sig-imageregistry] Image registry remain available failing consistently
1900126 - Creating a VM results in suggestion to create a default storage class when one already exists
1900138 - [OCP on RHV] Remove insecure mode from the installer
1900196 - stalld is not restarted after crash
1900239 - Skip "subPath should be able to unmount" NFS test
1900322 - metal3 pod's toleration for key: node-role.kubernetes.io/master currently matches on exact value matches but should match on Exists
1900377 - [e2e][automation] create new css selector for active users
1900496 - (release-4.7) Collect spec config for clusteroperator resources
1900672 - (s390x) Upgrade from old LUKS to new not working with DASD disks
1900699 - Impossible to add new Node on OCP 4.6 using large ECKD disks - fdasd issue
1900759 - include qemu-guest-agent by default
1900790 - Track all resource counts via telemetry
1900835 - Multus errors when cachefile is not found
1900935 - `oc adm release mirror` panic panic: runtime error
1900989 - accessing the route cannot wake up the idled resources
1901040 - When scaling down the status of the node is stuck on deleting
1901057 - authentication operator health check failed when installing a cluster behind proxy
1901107 - pod donut shows incorrect information
1901111 - Installer dependencies are broken
1901200 - linuxptp-daemon crash when enable debug log level
1901301 - CBO should handle platform=BM without provisioning CR
1901355 - [Azure][4.7] Invalid vm size from customized compute nodes does not fail properly
1901363 - High Podready Latency due to timed out waiting for annotations
1901373 - redundant bracket on snapshot restore button
1901376 - [on-prem] Upgrade from 4.6 to 4.7 failed with "timed out waiting for the condition during waitForControllerConfigToBeCompleted: controllerconfig is not completed: ControllerConfig has not completed: completed(false) running(false) failing(true"
1901395 - "Edit virtual machine template" action link should be removed
1901472 - [OSP] Bootstrap and master nodes use different keepalived unicast setting
1901517 - RHCOS 4.6.1 uses a single NetworkManager connection for multiple NICs when using default DHCP
1901531 - Console returns a blank page while trying to create an operator Custom CR with Invalid Schema
1901594 - Kubernetes resource CRUD operations.Kubernetes resource CRUD operations Pod "before all" hook for "creates the resource instance"
1901604 - CNO blocks editing Kuryr options
1901675 - [sig-network] multicast when using one of the plugins 'redhat/openshift-ovs-multitenant, redhat/openshift-ovs-networkpolicy' should allow multicast traffic in namespaces where it is enabled
1901909 - The device plugin pods / cni pod are restarted every 5 minutes
1901982 - [sig-builds][Feature:Builds] build can reference a cluster service with a build being created from new-build should be able to run a build that references a cluster service
1902019 - when podTopologySpreadConstraint strategy is enabled for descheduler it throws error
1902059 - Wire a real signer for service accout issuer
1902091 - `cluster-image-registry-operator` pod leaves connections open when fails connecting S3 storage
1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service
1902157 - The DaemonSet machine-api-termination-handler couldn't allocate Pod
1902253 - MHC status doesnt set RemediationsAllowed = 0
1902299 - Failed to mirror operator catalog - error: destination registry required
1902545 - Cinder csi driver node pod should add nodeSelector for Linux
1902546 - Cinder csi driver node pod doesn't run on master node
1902547 - Cinder csi driver controller pod doesn't run on master node
1902552 - Cinder csi driver does not use the downstream images
1902595 - Project workloads list view doesn't show alert icon and hover message
1902600 - Container csi-snapshotter in Cinder csi driver needs to use ImagePullPolicy=IfNotPresent
1902601 - Cinder csi driver pods run as BestEffort qosClass
1902653 - [BM][IPI] Master deployment failed: No valid host was found. Reason: No conductor service registered which supports driver redfish for conductor group
1902702 - [sig-auth][Feature:LDAP][Serial] ldap group sync can sync groups from ldap: oc cp over non-existing directory/file fails
1902746 - [BM][IP] Master deployment failed - Base.1.0.GeneralError: database is locked
1902824 - failed to generate semver informed package manifest: unable to determine default channel
1902894 - hybrid-overlay-node crashing trying to get node object during initialization
1902969 - Cannot load vmi detail page
1902981 - It should default to current namespace when create vm from template
1902996 - [AWS] UPI on USGov, bootstrap machine can not fetch ignition file via s3:// URI
1903033 - duplicated lines of imageContentSources is seen when mirror release image to local registry
1903034 - OLM continuously printing debug logs
1903062 - [Cinder csi driver] Deployment mounted volume have no write access
1903078 - Deleting VolumeSnapshotClass makes VolumeSnapshot not Ready
1903107 - Enable vsphere-problem-detector e2e tests
1903164 - OpenShift YAML editor jumps to top every few seconds
1903165 - Improve Canary Status Condition handling for e2e tests
1903172 - Column Management: Fix sticky footer on scroll
1903186 - [Descheduler] cluster logs should report some info when PodTopologySpreadConstraints strategy is enabled
1903188 - [Descheduler] cluster log reports failed to validate server configuration" err="unsupported log format:
1903192 - Role name missing on create role binding form
1903196 - Popover positioning is misaligned for Overview Dashboard status items
1903206 - Ingress controller incorrectly routes traffic to non-ready pods/backends.
1903226 - MutatingWebhookConfiguration pod-identity-webhook does not exclude critical control-plane components
1903248 - Backport Upstream Static Pod UID patch
1903277 - Deprovisioning Not Deleting Security Groups [VpcLimitExceeded on e2e-aws tests]
1903290 - Kubelet repeatedly log the same log line from exited containers
1903346 - PV backed by FC lun is not being unmounted properly and this leads to IO errors / xfs corruption.
1903382 - Panic when task-graph is canceled with a TaskNode with no tasks
1903400 - Migrate a VM which is not running goes to pending state
1903402 - Nic/Disk on VMI overview should link to VMI's nic/disk page
1903414 - NodePort is not working when configuring an egress IP address
1903424 - mapi_machine_phase_transition_seconds_sum doesn't work
1903464 - "Evaluating rule failed" for "record: cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" and "record: cluster:kubelet_volume_stats_used_bytes:provisioner:sum"
1903639 - Hostsubnet gatherer produces wrong output
1903651 - Network Policies are not working as expected with OVN-Kubernetes when traffic hairpins back to the same source through a service
1903660 - Cannot install with Assisted Installer on top of IPv6 since network provider is not started
1903674 - [sig-apps] ReplicationController should serve a basic image on each replica with a private image
1903717 - Handle different Pod selectors for metal3 Deployment
1903733 - Scale up followed by scale down can delete all running workers
1903917 - Failed to load "Developer Catalog" page
1903999 - Httplog response code is always zero
1904026 - The quota controllers should resync on new resources and make progress
1904064 - Automated cleaning is disabled by default
1904124 - DHCP to static lease script doesn't work correctly if starting with infinite leases
1904125 - Boostrap VM .ign image gets added into 'default' pool instead of <cluster-name>-<id>-bootstrap
1904131 - kuryr tempest plugin test test_ipblock_network_policy_sg_rules fails
1904133 - KubeletConfig flooded with failure conditions
1904161 - AlertmanagerReceiversNotConfigured fires unconditionally on alertmanager restart
1904243 - RHCOS 4.6.1 missing ISCSI initiatorname.iscsi !
1904244 - MissingKey errors for two plugins using i18next.t
1904262 - clusterresourceoverride-operator has version: 1.0.0 every build
1904296 - VPA-operator has version: 1.0.0 every build
1904297 - The index image generated by "opm index prune" leaves unrelated images
1904305 - Should have scroll-down bar for the field which the values list has too many results under dashboards
1904385 - [oVirt] registry cannot mount volume on 4.6.4 -> 4.6.6 upgrade
1904497 - vsphere-problem-detector: Run on vSphere cloud only
1904501 - [Descheduler] descheduler does not evict any pod when PodTopologySpreadConstraint strategy is set
1904502 - vsphere-problem-detector: allow longer timeouts for some operations
1904503 - vsphere-problem-detector: emit alerts
1904538 - [sig-arch][Early] Managed cluster should start all core operators: monitoring: container has runAsNonRoot and image has non-numeric user (nobody)
1904578 - metric scraping for vsphere problem detector is not configured
1904582 - All application traffic broken due to unexpected load balancer change on 4.6.4 -> 4.6.6 upgrade
1904663 - IPI pointer customization MachineConfig always generated
1904679 - [Feature:ImageInfo] Image info should display information about images
1904683 - `[sig-builds][Feature:Builds] s2i build with a root user image` tests use docker.io image
1904684 - [sig-cli] oc debug ensure it works with image streams
1904713 - Helm charts with kubeVersion restriction are filtered incorrectly
1904776 - Snapshot modal alert is not pluralized
1904824 - Set vSphere hostname from guestinfo before NM starts
1904941 - Insights status is always showing a loading icon
1904973 - KeyError: 'nodeName' on NP deletion
1904985 - Prometheus and thanos sidecar targets are down
1904993 - Many ampersand special characters are found in strings
1905066 - QE - Monitoring test cases - smoke test suite automation
1905074 - QE -Gherkin linter to maintain standards
1905100 - Too many haproxy processes in default-router pod causing high load average
1905104 - Snapshot modal disk items missing keys
1905115 - CI: dev-scripts fail on 02_configure_host: Failed to start network ostestbm
1905119 - Race in AWS EBS determining whether custom CA bundle is used
1905128 - [e2e][automation] e2e tests succeed without actually execute
1905133 - operator conditions special-resource-operator
1905141 - vsphere-problem-detector: report metrics through telemetry
1905146 - Backend Tests: TestHelmRepoGetter_SkipDisabled failures
1905194 - Detecting broken connections to the Kube API takes up to 15 minutes
1905221 - CVO transitions from "Initializing" to "Updating" despite not attempting many manifests
1905232 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them failing due to inconsistent images between CI and OCP
1905253 - Inaccurate text at bottom of Events page
1905298 - openshift-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905299 - OLM fails to update operator
1905307 - Provisioning CR is missing from must-gather
1905319 - cluster-samples-operator containers are not requesting required memory resource
1905320 - csi-snapshot-webhook is not requesting required memory resource
1905323 - dns-operator is not requesting required memory resource
1905324 - ingress-operator is not requesting required memory resource
1905327 - openshift-kube-scheduler initContainer wait-for-host-port is not requesting required resources: cpu, memory
1905328 - Changing the bound token service account issuer invalids previously issued bound tokens
1905329 - openshift-oauth-apiserver initContainer fix-audit-permissions is not requesting required resources: cpu, memory
1905330 - openshift-monitoring init-textfile is not requesting required resources: cpu, memory
1905338 - QE -Cypress Automation for Add Flow - Database, Yaml, OperatorBacked, PageDetails
1905347 - QE - Design Gherkin Scenarios
1905348 - QE - Design Gherkin Scenarios
1905362 - [sriov] Error message 'Fail to update DaemonSet' always shown in sriov operator pod
1905368 - [sriov] net-attach-def generated from sriovnetwork cannot be restored once it was deleted
1905370 - A-Z/Z-A sorting dropdown on Developer Catalog page is not aligned with filter text input
1905380 - Default to Red Hat/KubeVirt provider if common template does not have provider annotation
1905393 - CMO uses rbac.authorization.k8s.io/v1beta1 instead of rbac.authorization.k8s.io/v1
1905404 - The example of "Remove the entrypoint on the mysql:latest image" for `oc image append` does not work
1905416 - Hyperlink not working from Operator Description
1905430 - usbguard extension fails to install because of missing correct protobuf dependency version
1905492 - The stalld service has a higher scheduler priority than ksoftirq and rcu{b, c} threads
1905502 - Test flake - unable to get https transport for ephemeral-registry
1905542 - [GSS] The "External" mode option is not available when the OCP cluster is deployed using Redhat Cluster Assisted Installer 4.6.
1905599 - Errant change to lastupdatetime in copied CSV status can trigger runaway csv syncs
1905610 - Fix typo in export script
1905621 - Protractor login test fails against a 4.7 (nightly) Power cluster
1905640 - Subscription manual approval test is flaky
1905647 - Report physical core valid-for-subscription min/max/cumulative use to telemetry
1905696 - ClusterMoreUpdatesModal component did not get internationalized
1905748 - with sharded ingresscontrollers, all shards reload when any endpoint changes
1905761 - NetworkPolicy with Egress policyType is resulting in SDN errors and improper communication within Project
1905778 - inconsistent ingresscontroller between fresh installed cluster and upgraded cluster
1905792 - [OVN]Cannot create egressfirewalll with dnsName
1905889 - Should create SA for each namespace that the operator scoped
1905920 - Quickstart exit and restart
1905941 - Page goes to error after create catalogsource
1905977 - QE ghaekin design scenaio-pipeline metrics ODC-3711
1906032 - Canary Controller: Canary daemonset rolls out slowly in large clusters
1906100 - Disconnected cluster upgrades are failing from the cli, when signature retrieval is being blackholed instead of quickly rejected
1906105 - CBO annotates an existing Metal3 deployment resource to indicate that it is managing it
1906118 - OCS feature detection constantly polls storageclusters and storageclasses
1906120 - 'Create Role Binding' form not setting user or group value when created from a user or group resource
1906121 - [oc] After new-project creation, the kubeconfig file does not set the project
1906134 - OLM should not create OperatorConditions for copied CSVs
1906143 - CBO supports log levels
1906186 - i18n: Translators are not able to translate `this` without context for alert manager config
1906228 - tuned and openshift-tuned sometimes do not terminate gracefully, slowing reboots
1906274 - StorageClass installed by Cinder csi driver operator should enable the allowVolumeExpansion to support volume resize.
1906276 - `oc image append` can't work with multi-arch image with --filter-by-os='.*'
1906318 - use proper term for Authorized SSH Keys
1906335 - The lastTransitionTime, message, reason field of operatorcondition should be optional
1906356 - Unify Clone PVC boot source flow with URL/Container boot source
1906397 - IPA has incorrect kernel command line arguments
1906441 - HorizontalNav and NavBar have invalid keys
1906448 - Deploy using virtualmedia with provisioning network disabled fails - 'Failed to connect to the agent' in ironic-conductor log
1906459 - openstack: Quota Validation fails if unlimited quotas are given to a project
1906496 - [BUG] Thanos having possible memory leak consuming huge amounts of node's memory and killing them
1906508 - TestHeaderNameCaseAdjust outputs nil error message on some failures
1906511 - Root reprovisioning tests flaking often in CI
1906517 - Validation is not robust enough and may prevent to generate install-confing.
1906518 - Update snapshot API CRDs to v1
1906519 - Update LSO CRDs to use v1
1906570 - Number of disruptions caused by reboots on a cluster cannot be measured
1906588 - [ci][sig-builds] nodes is forbidden: User "e2e-test-jenkins-pipeline-xfghs-user" cannot list resource "nodes" in API group "" at the cluster scope
1906650 - Cannot collect network policy, EgressFirewall, egressip logs with gather_network_logs
1906655 - [SDN]Cannot colloect ovsdb-server.log and ovs-vswitchd.log with gather_network_logs
1906679 - quick start panel styles are not loaded
1906683 - Kn resources are not showing in Topology if triggers has KSVC and IMC as subscriber
1906684 - Event Source creation fails if user selects no app group and switch to yaml and then to form
1906685 - SinkBinding is shown in topology view if underlying resource along with actual source created
1906689 - user can pin to nav configmaps and secrets multiple times
1906691 - Add doc which describes disabling helm chart repository
1906713 - Quick starts not accesible for a developer user
1906718 - helm chart "provided by Redhat" is misspelled
1906732 - Machine API proxy support should be tested
1906745 - Update Helm endpoints to use Helm 3.4.x
1906760 - performance issues with topology constantly re-rendering
1906766 - localized `Autoscaled` & `Autoscaling` pod texts overlap with the pod ring
1906768 - Virtualization nav item is incorrectly placed in the Admin Workloads section
1906769 - topology fails to load with non-kubeadmin user
1906770 - shortcuts on mobiles view occupies a lot of space
1906798 - Dev catalog customization doesn't update console-config ConfigMap
1906806 - Allow installing extra packages in ironic container images
1906808 - [test-disabled] ServiceAccounts should support OIDC discovery of service account issuer
1906835 - Topology view shows add page before then showing full project workloads
1906840 - ClusterOperator should not have status "Updating" if operator version is the same as the release version
1906844 - EndpointSlice and EndpointSliceProxying feature gates should be disabled for openshift-sdn kube-proxy
1906860 - Bump kube dependencies to v1.20 for Net Edge components
1906864 - Quick Starts Tour: Need to adjust vertical spacing
1906866 - Translations of Sample-Utils
1906871 - White screen when sort by name in monitoring alerts page
1906872 - Pipeline Tech Preview Badge Alignment
1906875 - Provide an option to force backup even when API is not available.
1906877 - Placeholder' value in search filter do not match column heading in Vulnerabilities
1906879 - Add missing i18n keys
1906880 - oidcdiscoveryendpoint controller invalidates all TokenRequest API tokens during install
1906896 - No Alerts causes odd empty Table (Need no content message)
1906898 - Missing User RoleBindings in the Project Access Web UI
1906899 - Quick Start - Highlight Bounding Box Issue
1906916 - Teach CVO about flowcontrol.apiserver.k8s.io/v1beta1
1906933 - Cluster Autoscaler should have improved mechanisms for group identifiers
1906935 - Delete resources when Provisioning CR is deleted
1906968 - Must-gather should support collecting kubernetes-nmstate resources
1906986 - Ensure failed pod adds are retried even if the pod object doesn't change
1907199 - Need to upgrade machine-api-operator module version under cluster-api-provider-kubevirt
1907202 - configs.imageregistry.operator.openshift.io cluster does not update its status fields after URL change
1907211 - beta promotion of p&f switched storage version to v1beta1, making downgrades impossible.
1907269 - Tooltips data are different when checking stack or not checking stack for the same time
1907280 - Install tour of OCS not available.
1907282 - Topology page breaks with white screen
1907286 - The default mhc machine-api-termination-handler couldn't watch spot instance
1907287 - [csi-snapshot-webhook] should support both v1beta1 and v1 version when creating volumesnapshot/volumesnapshotcontent
1907293 - Increase timeouts in e2e tests
1907295 - Gherkin script for improve management for helm
1907299 - Advanced Subscription Badge for KMS and Arbiter not present
1907303 - Align VM template list items by baseline
1907304 - Use PF styles for selected template card in VM Wizard
1907305 - Drop 'ISO' from CDROM boot source message
1907307 - Support and provider labels should be passed on between templates and sources
1907310 - Pin action should be renamed to favorite
1907312 - VM Template source popover is missing info about added date
1907313 - ClusterOperator objects cannot be overriden with cvo-overrides
1907328 - iproute-tc package is missing in ovn-kube image
1907329 - CLUSTER_PROFILE env. variable is not used by the CVO
1907333 - Node stuck in degraded state, mcp reports "Failed to remove rollback: error running rpm-ostree cleanup -r: error: Timeout was reached"
1907373 - Rebase to kube 1.20.0
1907375 - Bump to latest available 1.20.x k8s - workloads team
1907378 - Gather netnamespaces networking info
1907380 - kube-rbac-proxy exposes tokens, has excessive verbosity
1907381 - OLM fails to deploy an operator if its deployment template contains a description annotation that doesn't match the CSV one
1907390 - prometheus-adapter: panic after k8s 1.20 bump
1907399 - build log icon link on topology nodes cause app to reload
1907407 - Buildah version not accessible
1907421 - [4.6.1]oc-image-mirror command failed on "error: unable to copy layer"
1907453 - Dev Perspective -> running vm details -> resources -> no data
1907454 - Install PodConnectivityCheck CRD with CNO
1907459 - "The Boot source is also maintained by Red Hat." is always shown for all boot sources
1907475 - Unable to estimate the error rate of ingress across the connected fleet
1907480 - `Active alerts` section throwing forbidden error for users.
1907518 - Kamelets/Eventsource should be shown to user if they have create access
1907543 - Korean timestamps are shown when users' language preferences are set to German-en-en-US
1907610 - Update kubernetes deps to 1.20
1907612 - Update kubernetes deps to 1.20
1907621 - openshift/installer: bump cluster-api-provider-kubevirt version
1907628 - Installer does not set primary subnet consistently
1907632 - Operator Registry should update its kubernetes dependencies to 1.20
1907639 - pass dual-stack node IPs to kubelet in dual-stack clusters
1907644 - fix up handling of non-critical annotations on daemonsets/deployments
1907660 - Pod list does not render cell height correctly when pod names are too long (dynamic table rerendering issue?)
1907670 - CVE-2020-27846 crewjam/saml: authentication bypass in saml authentication
1907671 - Ingress VIP assigned to two infra nodes simultaneously - keepalived process running in pods seems to fail
1907767 - [e2e][automation]update test suite for kubevirt plugin
1907770 - Recent RHCOS 47.83 builds (from rhcos-47.83.202012072210-0 on) don't allow master and worker nodes to boot
1907792 - The `overrides` of the OperatorCondition cannot block the operator upgrade
1907793 - Surface support info in VM template details
1907812 - 4.7 to 4.6 downgrade stuck in clusteroperator storage
1907822 - [OCP on OSP] openshift-install panic when checking quota with install-config have no flavor set
1907863 - Quickstarts status not updating when starting the tour
1907872 - dual stack with an ipv6 network fails on bootstrap phase
1907874 - QE - Design Gherkin Scenarios for epic ODC-5057
1907875 - No response when try to expand pvc with an invalid size
1907876 - Refactoring record package to make gatherer configurable
1907877 - QE - Automation- pipelines builder scripts
1907883 - Fix Pipleine creation without namespace issue
1907888 - Fix pipeline list page loader
1907890 - Misleading and incomplete alert message shown in pipeline-parameters and pipeline-resources form
1907892 - Unable to edit application deployed using "From Devfile" option
1907893 - navSortUtils.spec.ts unit test failure
1907896 - When a workload is added, Topology does not place the new items well
1907908 - VM Wizard always uses VirtIO for the VM rootdisk regardless what is defined in common-template
1907924 - Enable madvdontneed in OpenShift Images
1907929 - Enable madvdontneed in OpenShift System Components Part 2
1907936 - NTO is not reporting nto_profile_set_total metrics correctly after reboot
1907947 - The kubeconfig saved in tenantcluster shouldn't include anything that is not related to the current context
1907948 - OCM-O bump to k8s 1.20
1907952 - bump to k8s 1.20
1907972 - Update OCM link to open Insights tab
1907989 - DataVolumes was intorduced in common templates - VM creation fails in the UI
1907998 - Gather kube_pod_resource_request/limit metrics as exposed in upstream KEP 1916
1908001 - [CVE-2020-10749] Update github.com/containernetworking/plugins to v.0.8.6 in egress-router-cni
1908014 - e2e-aws-ansible and e2e-aws-helm are broken in ocp-release-operator-sdk
1908035 - dynamic-demo-plugin build does not generate dist directory
1908135 - quick search modal is not centered over topology
1908145 - kube-scheduler-recovery-controller container crash loop when router pod is co-scheduled
1908159 - [AWS C2S] MCO fails to sync cloud config
1908171 - GCP: Installation fails when installing cluster with n1-custom-4-16384custom type (n1-custom-4-16384)
1908180 - Add source for template is stucking in preparing pvc
1908217 - CI: Server-Side Apply should work for oauth.openshift.io/v1: has no tokens
1908231 - [Migration] The pods ovnkube-node are in CrashLoopBackOff after SDN to OVN
1908277 - QE - Automation- pipelines actions scripts
1908280 - Documentation describing `ignore-volume-az` is incorrect
1908296 - Fix pipeline builder form yaml switcher validation issue
1908303 - [CVE-2020-28367 CVE-2020-28366] Remove CGO flag from rhel Dockerfile in Egress-Router-CNI
1908323 - Create button missing for PLR in the search page
1908342 - The new pv_collector_total_pv_count is not reported via telemetry
1908344 - [vsphere-problem-detector] CheckNodeProviderID and CheckNodeDiskUUID have the same name
1908347 - CVO overwrites ValidatingWebhookConfiguration for snapshots
1908349 - Volume snapshot tests are failing after 1.20 rebase
1908353 - QE - Automation- pipelines runs scripts
1908361 - bump to k8s 1.20
1908367 - QE - Automation- pipelines triggers scripts
1908370 - QE - Automation- pipelines secrets scripts
1908375 - QE - Automation- pipelines workspaces scripts
1908381 - Go Dependency Fixes for Devfile Lib
1908389 - Loadbalancer Sync failing on Azure
1908400 - Tests-e2e, increase timeouts, re-add TestArchiveUploadedAndResultsReceived
1908407 - Backport Upstream 95269 to fix potential crash in kubelet
1908410 - Exclude Yarn from VSCode search
1908425 - Create Role Binding form subject type and name are undefined when All Project is selected
1908431 - When the marketplace-operator pod get's restarted, the custom catalogsources are gone, as well as the pods
1908434 - Remove &apos from metal3-plugin internationalized strings
1908437 - Operator backed with no icon has no badge associated with the CSV tag
1908459 - bump to k8s 1.20
1908461 - Add bugzilla component to OWNERS file
1908462 - RHCOS 4.6 ostree removed dhclient
1908466 - CAPO AZ Screening/Validating
1908467 - Zoom in and zoom out in topology package should be sentence case
1908468 - [Azure][4.7] Installer can't properly parse instance type with non integer memory size
1908469 - nbdb failed to come up while bringing up OVNKubernetes cluster
1908471 - OLM should bump k8s dependencies to 1.20
1908484 - oc adm release extract --cloud=aws --credentials-requests dumps all manifests
1908493 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1908545 - VM clone dialog does not open
1908557 - [e2e][automation]Miss css id on bootsource and reviewcreate step on wizard
1908562 - Pod readiness is not being observed in real world cases
1908565 - [4.6] Cannot filter the platform/arch of the index image
1908573 - Align the style of flavor
1908583 - bootstrap does not run on additional networks if configured for master in install-config
1908596 - Race condition on operator installation
1908598 - Persistent Dashboard shows events for all provisioners
1908641 - Go back to Catalog Page link on Virtual Machine page vanishes on empty state
1908648 - Skip TestKernelType test on OKD, adjust TestExtensions
1908650 - The title of customize wizard is inconsistent
1908654 - cluster-api-provider: volumes and disks names shouldn't change by machine-api-operator
1908675 - Reenable [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default [Suite:openshift/conformance/parallel] [Suite:k8s]
1908687 - Option to save user settings separate when using local bridge (affects console developers only)
1908697 - Show `kubectl diff ` command in the oc diff help page
1908715 - Pressing the arrow up key when on topmost quick-search list item it should loop back to bottom
1908716 - UI breaks on click of sidebar of ksvc (if revisions not up) in topology on 4.7 builds
1908717 - "missing unit character in duration" error in some network dashboards
1908746 - [Safari] Drop Shadow doesn't works as expected on hover on workload
1908747 - stale S3 CredentialsRequest in CCO manifest
1908758 - AWS: NLB timeout value is rejected by AWS cloud provider after 1.20 rebase
1908830 - RHCOS 4.6 - Missing Initiatorname
1908868 - Update empty state message for EventSources and Channels tab
1908880 - 4.7 aws-serial CI: NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
1908883 - CVE-2020-29652 golang: crypto/ssh: crafted authentication request can lead to nil pointer dereference
1908888 - Dualstack does not work with multiple gateways
1908889 - Bump CNO to k8s 1.20
1908891 - TestDNSForwarding DNS operator e2e test is failing frequently
1908914 - CNO: upgrade nodes before masters
1908918 - Pipeline builder yaml view sidebar is not responsive
1908960 - QE - Design Gherkin Scenarios
1908971 - Gherkin Script for pipeline debt 4.7
1908983 - i18n: Add Horizontal Pod Autoscaler action menu is not translated
1908997 - Unsupported access mode should not be available when creating pvc by cinder-csi-driver/gcp-pd-csi-driver from web-console
1908998 - [cinder-csi-driver] doesn't detect the credentials change
1909004 - "No datapoints found" for RHEL node's filesystem graph
1909005 - i18n: workloads list view heading is not translated
1909012 - csi snapshot webhook does not block any invalid update for volumesnapshot and volumesnapshotcontent objects
1909027 - Disks option of Sectected capacity chart shows HDD disk even on selection of SDD disk type
1909043 - OCP + OCS 4.7 Internal - Storage cluster creation throws warning when zone=0 in VMware
1909067 - Web terminal should keep latest output when connection closes
1909070 - PLR and TR Logs component is not streaming as fast as tkn
1909092 - Error Message should not confuse user on Channel form
1909096 - OCP 4.7+OCS 4.7 - The Requested Cluster Capacity field needs to include the selected capacity in calculation in Review and Create Page
1909108 - Machine API components should use 1.20 dependencies
1909116 - Catalog Sort Items dropdown is not aligned on Firefox
1909198 - Move Sink action option is not working
1909207 - Accessibility Issue on
monitoring page\n1909236 - Remove pinned icon overlap on resource name\n1909249 - Intermittent packet drop from pod to pod\n1909276 - Accessibility Issue on create project modal\n1909289 - oc debug of an init container no longer works\n1909290 - Logging may be broken due to mix of k8s.io/klog v1 and v2\n1909358 - registry.redhat.io/redhat/community-operator-index:latest only have hyperfoil-bundle\n1909453 - Boot disk RAID can corrupt ESP if UEFI firmware writes to it\n1909455 - Boot disk RAID will not boot if the primary disk enumerates but fails I/O\n1909464 - Build operator-registry with golang-1.15\n1909502 - NO_PROXY is not matched between bootstrap and global cluster setting which lead to desired master machineconfig is not found\n1909521 - Add kubevirt cluster type for e2e-test workflow\n1909527 - [IPI Baremetal] After upgrade from 4.6 to 4.7 metal3 pod does not get created\n1909587 - [OCP4] all of the OCP master nodes with soft-anti-affinity run on the same OSP node\n1909610 - Fix available capacity when no storage class selected\n1909678 - scale up / down buttons available on pod details side panel\n1909723 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1909730 - unbound variable error if EXTRA_PKGS_LIST is not defined\n1909739 - Arbiter request data changes\n1909744 - cluster-api-provider-openstack: Bump gophercloud\n1909790 - PipelineBuilder yaml view cannot be used for editing a pipeline\n1909791 - Update standalone kube-proxy config for EndpointSlice\n1909792 - Empty states for some details page subcomponents are not i18ned\n1909815 - Perspective switcher is only half-i18ned\n1909821 - OCS 4.7 LSO installation blocked because of Error \"Invalid value: \"integer\": spec.flexibleScaling in body\n1909836 - operator-install-global Cypress test was failing in OLM as it depends on an operator that isn\u0027t installed in CI\n1909864 - 
promote-release-openshift-machine-os-content-e2e-aws-4.5 is perm failing\n1909911 - [OVN]EgressFirewall caused a segfault\n1909943 - Upgrade from 4.6 to 4.7 stuck due to write /sys/devices/xxxx/block/sda/queue/scheduler: invalid argument\n1909958 - Support Quick Start Highlights Properly\n1909978 - ignore-volume-az = yes not working on standard storageClass\n1909981 - Improve statement in template select step\n1909992 - Fail to pull the bundle image when using the private index image\n1910024 - Reload issue in latest(4.7) UI code on 4.6 cluster locally in dev\n1910036 - QE - Design Gherkin Scenarios ODC-4504\n1910049 - UPI: ansible-galaxy is not supported\n1910127 - [UPI on oVirt]:  Improve UPI Documentation\n1910140 - fix the api dashboard with changes in upstream kube 1.20\n1910160 - If two OperatorConditions include the same deployments they will keep updating the deployment\u0027s containers with the OPERATOR_CONDITION_NAME Environment Variable\n1910165 - DHCP to static lease script doesn\u0027t handle multiple addresses\n1910305 - [Descheduler] - The minKubeVersion should be 1.20.0\n1910409 - Notification drawer is not localized for i18n\n1910459 - Could not provision gcp volume if delete secret gcp-pd-cloud-credentials\n1910492 - KMS details are auto-populated on the screen in next attempt at Storage cluster creation\n1910501 - Installed Operators-\u003eOperand required: Clicking on cancel in Storage cluster page takes back to the Install Operator page\n1910533 - [OVN] It takes about 5 minutes for EgressIP failover to work\n1910581 - library-go: proxy ENV is not injected into csi-driver-controller which lead to storage operator never get ready\n1910666 - Creating a Source Secret from type SSH-Key should use monospace font for better usability\n1910738 - OCP 4.7 Installation fails on VMWare due to 1 worker that is degraded\n1910739 - Redfish-virtualmedia (idrac) deploy fails on \"The Virtual Media image server is already connected\"\n1910753 - Support 
Directory Path to Devfile\n1910805 - Missing translation for Pipeline status and breadcrumb text\n1910829 - Cannot delete a PVC if the dv\u0027s phase is WaitForFirstConsumer\n1910840 - Show Nonexistent  command info in the `oc rollback -h` help page\n1910859 - breadcrumbs doesn\u0027t use last namespace\n1910866 - Unify templates string\n1910870 - Unify template dropdown action\n1911016 - Prometheus unable to mount NFS volumes after upgrading to 4.6\n1911129 - Monitoring charts renders nothing when switching from a Deployment to \"All workloads\"\n1911176 - [MSTR-998] Wrong text shown when hovering on lines of charts in API Performance dashboard\n1911212 - [MSTR-998] API Performance Dashboard \"Period\" drop-down has a choice \"$__auto_interval_period\" which can bring \"1:154: parse error: missing unit character in duration\"\n1911213 - Wrong and misleading warning for VMs that were created manually (not from template)\n1911257 - [aws-c2s] failed to create cluster, kube-cloud-config was not created\n1911269 - waiting for the build message present when build exists\n1911280 - Builder images are not detected for Dotnet, Httpd, NGINX\n1911307 - Pod Scale-up requires extra privileges in OpenShift web-console\n1911381 - \"Select Persistent Volume Claim project\" shows in customize wizard when select a source available template\n1911382 - \"source volumeMode (Block) and target volumeMode (Filesystem) do not match\" shows in VM Error\n1911387 - Hit error - \"Cannot read property \u0027value\u0027 of undefined\" while creating VM from template\n1911408 - [e2e][automation] Add auto-clone cli tests and new flow of VM creation\n1911418 - [v2v] The target storage class name is not displayed if default storage class is used\n1911434 - git ops empty state page displays icon with watermark\n1911443 - SSH Cretifiaction field should be validated\n1911465 - IOPS display wrong unit\n1911474 - Devfile Application Group Does Not Delete Cleanly (errors)\n1911487 - Pruning Deployments 
should use ReplicaSets instead of ReplicationController\n1911574 - Expose volume mode  on Upload Data form\n1911617 - [CNV][UI] Failure to add source to VM template when no default storage class is defined\n1911632 - rpm-ostree command fail due to wrong options when updating ocp-4.6 to 4.7 on worker nodes with rt-kernel\n1911656 - using \u0027operator-sdk run bundle\u0027 to install operator successfully, but the command output said \u0027Failed to run bundle\u0027\u0027\n1911664 - [Negative Test] After deleting metal3 pod, scaling worker stuck on provisioning state\n1911782 - Descheduler should not evict pod used local storage by the PVC\n1911796 - uploading flow being displayed before submitting the form\n1912066 - The ansible type operator\u0027s manager container is not stable when managing the CR\n1912077 - helm operator\u0027s default rbac forbidden\n1912115 - [automation] Analyze job keep failing because of \u0027JavaScript heap out of memory\u0027\n1912237 - Rebase CSI sidecars for 4.7\n1912381 - [e2e][automation] Miss css ID on Create Network Attachment Definition page\n1912409 - Fix flow schema deployment\n1912434 - Update guided tour modal title\n1912522 - DNS Operator e2e test: TestCoreDNSImageUpgrade is fundamentally broken\n1912523 - Standalone pod status not updating in topology graph\n1912536 - Console Plugin CR for console-demo-plugin has wrong apiVersion\n1912558 - TaskRun list and detail screen doesn\u0027t show Pending status\n1912563 - p\u0026f: carry 97206: clean up executing request on panic\n1912565 - OLM macOS local build broken by moby/term dependency\n1912567 - [OCP on RHV] Node becomes to \u0027NotReady\u0027 status when shutdown vm from RHV UI only on the second deletion\n1912577 - 4.1/4.2-\u003e4.3-\u003e...-\u003e 4.7 upgrade is stuck during 4.6-\u003e4.7 with co/openshift-apiserver Degraded, co/network not Available and several other components pods CrashLoopBackOff\n1912590 - publicImageRepository not being populated\n1912640 - Go 
operator\u0027s controller pods is forbidden\n1912701 - Handle dual-stack configuration for NIC IP\n1912703 - multiple queries can\u0027t be plotted in the same graph under some conditons\n1912730 - Operator backed: In-context should support visual connector if SBO is not installed\n1912828 - Align High Performance VMs with High Performance in RHV-UI\n1912849 - VM from wizard - default flavor does not match the actual flavor set by common templates\n1912852 - VM from wizard - available VM templates - \"storage\" field is \"0 B\"\n1912888 - recycler template should be moved to KCM operator\n1912907 - Helm chart repository index can contain unresolvable relative URL\u0027s\n1912916 - Set external traffic policy to cluster for IBM platform\n1912922 - Explicitly specifying the operator generated default certificate for an ingress controller breaks the ingress controller\n1912938 - Update confirmation modal for quick starts\n1912942 - cluster-storage-operator: proxy ENV is not injected into vsphere-problem-detector deployment\n1912944 - cluster-storage-operator: proxy ENV is not injected into Manila CSI driver operator deployment\n1912945 - aws-ebs-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912946 - gcp-pd-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912947 - openstack-cinder-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912948 - csi-driver-manila-operator: proxy ENV is not injected into the CSI driver\n1912949 - ovirt-csi-driver-operator: proxy ENV is not injected into the CSI driver\n1912977 - rebase upstream static-provisioner\n1913006 - Remove etcd v2 specific alerts with etcd_http* metrics\n1913011 - [OVN] Pod\u0027s external traffic not use egressrouter macvlan ip as a source ip\n1913037 - update static-provisioner base image\n1913047 - baremetal clusteroperator progressing status toggles between true and false when cluster is in a steady state\n1913085 - Regression OLM uses scoped client for 
CRD installation\n1913096 - backport: cadvisor machine metrics are missing in k8s 1.19\n1913132 - The installation of Openshift Virtualization reports success early before it \u0027s succeeded eventually\n1913154 - Upgrading to 4.6.10 nightly failed with RHEL worker nodes: Failed to find /dev/disk/by-label/root\n1913196 - Guided Tour doesn\u0027t handle resizing of browser\n1913209 - Support modal should be shown for community supported templates\n1913226 - [Migration] The SDN migration rollback failed if customize vxlanPort\n1913249 - update info alert this template is not aditable\n1913285 - VM list empty state should link to virtualization quick starts\n1913289 - Rebase AWS EBS CSI driver for 4.7\n1913292 - OCS 4.7 Installation failed over vmware when arbiter was enabled, as flexibleScaling is also getting enabled\n1913297 - Remove restriction of taints for arbiter node\n1913306 - unnecessary scroll bar is present on quick starts panel\n1913325 - 1.20 rebase for openshift-apiserver\n1913331 - Import from git: Fails to detect Java builder\n1913332 - Pipeline visualization breaks the UI when multiple taskspecs are used\n1913343 - (release-4.7) Added changelog file for insights-operator\n1913356 - (release-4.7) Implemented gathering specific logs from openshift apiserver operator\n1913371 - Missing i18n key \"Administrator\" in namespace \"console-app\" and language \"en.\"\n1913386 - users can see metrics of namespaces for which they don\u0027t have rights when monitoring own services with prometheus user workloads\n1913420 - Time duration setting of resources is not being displayed\n1913536 - 4.6.9 -\u003e 4.7 upgrade hangs.  
RHEL 7.9 worker stuck on \"error enabling unit: Failed to execute operation: File exists\\\\n\\\"\n1913554 - Recording rule for ingress error fraction SLI is incorrect, uses irate instead of increase\n1913560 - Normal user cannot load template on the new wizard\n1913563 - \"Virtual Machine\" is not on the same line in create button when logged with normal user\n1913567 - Tooltip data should be same for line chart or stacked chart, display data value same as the table\n1913568 - Normal user cannot create template\n1913582 - [Migration]SDN to OVN migration stucks on MCO for rhel worker\n1913585 - Topology descriptive text fixes\n1913608 - Table data contains data value None after change time range in graph and change back\n1913651 - Improved Red Hat image and crashlooping OpenShift pod collection\n1913660 - Change location and text of Pipeline edit flow alert\n1913685 - OS field not disabled when creating a VM from a template\n1913716 - Include additional use of existing libraries\n1913725 - Refactor Insights Operator Plugin states\n1913736 - Regression: fails to deploy computes when using root volumes\n1913747 - Update operator to kubernetes 1.20.1 to pickup upstream fixes\n1913751 - add third-party network plugin test suite to openshift-tests\n1913783 - QE-To fix the merging pr issue, commenting the afterEach() block\n1913807 - Template support badge should not be shown for community supported templates\n1913821 - Need definitive steps about uninstalling descheduler operator\n1913851 - Cluster Tasks are not sorted in pipeline builder\n1913864 - BuildConfig YAML template references ruby ImageStreamTag that no longer exists\n1913951 - Update the Devfile Sample Repo to an Official Repo Host\n1913960 - Cluster Autoscaler should use 1.20 dependencies\n1913969 - Field dependency descriptor can sometimes cause an exception\n1914060 - Disk created from \u0027Import via Registry\u0027 cannot be used as boot disk\n1914066 - [sriov] sriov dp pod crash when delete ovs HW 
offload policy\n1914090 - Grafana - The resulting dataset is too large to graph (OCS RBD volumes being counted as disks)\n1914119 - vsphere problem detector operator has no permission to update storages.operator.openshift.io instances\n1914125 - Still using /dev/vde as default device path when create localvolume\n1914183 - Empty NAD page is missing link to quickstarts\n1914196 - target port in `from dockerfile` flow does nothing\n1914204 - Creating VM from dev perspective may fail with template not found error\n1914209 - Associate image secret name to pipeline serviceaccount imagePullSecrets\n1914212 - [e2e][automation] Add test to validate bootable disk souce\n1914250 - ovnkube-node fails on master nodes when both DHCPv6 and SLAAC addresses are configured on nodes\n1914284 - Upgrade to OCP 4.6.9 results in cluster-wide DNS and connectivity issues due to bad NetworkPolicy flows\n1914287 - Bring back selfLink\n1914301 - User VM Template source should show the same provider as template itself\n1914303 - linuxptp-daemon is not forwarding ptp4l stderr output to openshift logs\n1914309 - /terminal page when WTO not installed shows nonsensical error\n1914334 - order of getting started samples is arbitrary\n1914343 - [sig-imageregistry][Feature:ImageTriggers] Annotation trigger reconciles after the image is overwritten [Suite:openshift/conformance/parallel]  timeout on s390x\n1914349 - Increase and decrease buttons in max and min pods in HPA page has distorted UI\n1914405 - Quick search modal should be opened when coming back from a selection\n1914407 - Its not clear that node-ca is running as non-root\n1914427 - Count of pods on the dashboard is incorrect\n1914439 - Typo in SRIOV port create command example\n1914451 - cluster-storage-operator pod running as root\n1914452 - oc image append, oc image extract outputs wrong suggestion to use --keep-manifest-list=true\n1914642 - Customize Wizard Storage tab does not pass validation\n1914723 - SamplesTBRInaccessibleOnBoot 
Alert has a misspelling\n1914793 - device names should not be translated\n1914894 - Warn about using non-groupified api version\n1914926 - webdriver-manager pulls incorrect version of ChomeDriver due to a bug\n1914932 - Put correct resource name in relatedObjects\n1914938 - PVC disk is not shown on customization wizard general tab\n1914941 - VM Template rootdisk is not deleted after fetching default disk bus\n1914975 - Collect logs from openshift-sdn namespace\n1915003 - No estimate of average node readiness during lifetime of a cluster\n1915027 - fix MCS blocking iptables rules\n1915041 - s3:ListMultipartUploadParts is relied on implicitly\n1915079 - Canary controller should not periodically rotate the canary route endpoint for performance reasons\n1915080 - Large number of tcp connections with shiftstack ocp cluster in about 24 hours\n1915085 - Pods created and rapidly terminated get stuck\n1915114 - [aws-c2s] worker machines are not create during install\n1915133 - Missing default pinned nav items in dev perspective\n1915176 - Update snapshot API CRDs to v1 in web-console when creating volumesnapshot related resource\n1915187 - Remove the \"Tech preview\" tag in web-console for volumesnapshot\n1915188 - Remove HostSubnet anonymization\n1915200 - [OCP 4.7+ OCS 4.6]Arbiter related Note should not show up during UI deployment\n1915217 - OKD payloads expect to be signed with production keys\n1915220 - Remove dropdown workaround for user settings\n1915235 - Failed to upgrade to 4.7 from 4.6 due to the machine-config failure\n1915262 - When deploying with assisted install the CBO operator is installed and enabled without metal3 pod\n1915277 - [e2e][automation]fix cdi upload form test\n1915295 - [BM][IP][Dualstack] Installation failed - operators report dial tcp 172.30.0.1:443: i/o timeout\n1915304 - Updating scheduling component builder \u0026 base images to be consistent with ART\n1915312 - Prevent schedule Linux openshift-network-diagnostics pod on Windows 
node\n1915318 - [Metal] bareMetal IPI - cannot interact with toolbox container after first execution only in parallel from different connection\n1915348 - [RFE] linuxptp operator needs to expose the uds_address_socket to be used by an application pod\n1915357 - Dev Catalog doesn\u0027t load anything if virtualization operator is installed\n1915379 - New template wizard should require provider and make support input a dropdown type\n1915408 - Failure in operator-registry kind e2e test\n1915416 - [Descheduler] descheduler evicts pod which does not have any ownerRef or descheduler evict annotation\n1915460 - Cluster name size might affect installations\n1915500 - [aws c2s] kube-controller-manager crash loops trying to fetch the AWS instance\n1915540 - Silent 4.7 RHCOS install failure on ppc64le\n1915579 - [Metal] redhat-support-tool became unavailable after tcpdump usage (BareMetal IPI)\n1915582 - p\u0026f: carry upstream pr 97860\n1915594 - [e2e][automation] Improve test for disk validation\n1915617 - Bump bootimage for various fixes\n1915624 - \"Please fill in the following field: Template provider\" blocks customize wizard\n1915627 - Translate Guided Tour text. \n1915643 - OCP4.6 to 4.7 upgrade failed due to manila csi driver operator sync error\n1915647 - Intermittent White screen when the connector dragged to revision\n1915649 - \"Template support\" pop up is not a warning; checkbox text should be rephrased\n1915654 - [e2e][automation] Add a verification for Afinity modal should hint \"Matching node found\"\n1915661 - Can\u0027t run the \u0027oc adm prune\u0027 command in a pod\n1915672 - Kuryr doesn\u0027t work with selfLink disabled. 
\n1915674 - Golden image PVC creation - storage size should be taken from the template\n1915685 - Message for not supported template is not clear enough\n1915760 - Need to increase timeout to wait rhel worker get ready\n1915793 - quick starts panel syncs incorrectly across browser windows\n1915798 - oauth connection errors for openshift console pods on an OVNKube OCP 4.7 cluster\n1915818 - vsphere-problem-detector: use \"_totals\" in metrics\n1915828 - Latest Dell firmware (04.40.00.00) fails to install IPI on BM using idrac-virtualmedia protocol\n1915859 - vsphere-problem-detector: does not report ESXi host version nor VM HW version\n1915871 - operator-sdk version in new downstream image should be v1.2.0-ocp not v4.7.0\n1915879 - Pipeline Dashboard tab Rename to Pipeline Metrics\n1915885 - Kuryr doesn\u0027t support workers running on multiple subnets\n1915898 - TaskRun log output shows \"undefined\" in streaming\n1915907 - test/cmd/builds.sh uses docker.io\n1915912 - sig-storage-csi-snapshotter image not available\n1915926 - cluster-api-provider-openstack: Update ose-openstack-machine-controllers builder \u0026 base images to be consistent with ART\n1915929 - A11y Violation: svg-img-alt for time axis of Utilization Card on Cluster Dashboard\n1915939 - Resizing the browser window removes Web Terminal Icon\n1915945 - [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]\n1915959 - Baremetal cluster operator is included in a ROKS installation of 4.7\n1915962 - ROKS: manifest with machine health check fails to apply in 4.7\n1915972 - Global configuration breadcrumbs do not work as expected\n1915981 - Install ethtool and conntrack in container for debugging\n1915995 - \"Edit RoleBinding Subject\" action under RoleBinding list page kebab actions causes unhandled exception\n1915998 - Installer bootstrap node setting of additional subnets inconsistent with additional security groups\n1916021 - OLM enters infinite loop if Pending CSV 
replaces itself\n1916056 - Need Visual Web Terminal metric enabled for OCP monitoring telemetry\n1916081 - non-existant should be non-existent in CloudCredentialOperatorTargetNamespaceMissing alert\u0027s annotations\n1916099 - VM creation - customization wizard - user should be allowed to delete and re-create root disk\n1916126 - [e2e][automation] Help fix tests for vm guest-agent and next-run-configuration\n1916145 - Explicitly set minimum versions of python libraries\n1916164 - Update csi-driver-nfs builder \u0026 base images to be consistent with ART\n1916221 - csi-snapshot-controller-operator: bump dependencies for 4.7\n1916271 - Known issues should mention failure to apply soft-anti-affinity to masters beyond the third\n1916363 - [OVN] ovs-configuration.service reports as failed within all nodes using version 4.7.0-fc.2\n1916379 - error metrics from vsphere-problem-detector should be gauge\n1916382 - Can\u0027t create ext4 filesystems with Ignition\n1916384 - 4.5.15 and later cluster-version operator does not sync ClusterVersion status before exiting, leaving \u0027verified: false\u0027 even for verified updates\n1916401 - Deleting an ingress controller with a bad DNS Record hangs\n1916417 - [Kuryr] Must-gather does not have all Custom Resources information\n1916419 - [sig-devex][Feature:ImageEcosystem][Slow] openshift images should be SCL enabled returning s2i usage when running the image\n1916454 - teach CCO about upgradeability from 4.6 to 4.7\n1916486 - [OCP RHV] [Docs] Update RHV CSI provisioning section in OCP documenation\n1916502 - Boot disk mirroring fails with mdadm error\n1916524 - Two rootdisk shows on storage step\n1916580 - Default yaml is broken for VM and VM template\n1916621 - oc adm node-logs examples are wrong\n1916642 - [zh_CN] Redundant period in Secrets - Create drop down menu - Key value secret. 
\n1916692 - Possibly fails to destroy LB and thus cluster\n1916711 - Update Kube dependencies in MCO to 1.20.0\n1916747 - remove links to quick starts if virtualization operator isn\u0027t updated to 2.6\n1916764 - editing a workload with no application applied, will auto fill the app\n1916834 - Pipeline Metrics - Text Updates\n1916843 - collect logs from openshift-sdn-controller pod\n1916853 - cluster will not gracefully recover if openshift-etcd namespace is removed\n1916882 - OCS 4.7 LSO : wizard (Discover disks and create storageclass) does not show zone when topology.kubernetes.io/zone are added manually\n1916888 - OCS wizard Donor chart does not get updated when `Device Type` is edited\n1916938 - Using 4.6 install-config.yaml file with lbFloatingIP results in validation error \"Forbidden: cannot specify lbFloatingIP and apiFloatingIP together\"\n1916949 - ROKS: manifests in openshift-oauth-apiserver ns fails to create with non-existent namespace\n1917101 - [UPI on oVirt] - \u0027RHCOS image\u0027 topic isn\u0027t located in the right place in UPI document\n1917114 - Upgrade from 4.5.9 to 4.7 fails as authentication operator is Degraded due to \u0027\"ProxyConfigController\" controller failed to sync \"key\"\u0027 error\n1917117 - Common templates - disks screen: invalid disk name\n1917124 - Custom template - clone existing PVC - the name of the target VM\u0027s data volume is hard-coded; only one VM can be created\n1917146 - [oVirt] Consume 23-10 ovirt sdk- csi operator\n1917147 - [oVirt] csi operator panics if ovirt-engine suddenly becomes unavailable. 
\n1917148 - [oVirt] Consume 23-10 ovirt sdk\n1917239 - Monitoring time options overlaps monitoring tab navigation when Quickstart panel is opened\n1917272 - Should update the default minSize to 1Gi when create localvolumeset on web console\n1917303 - [automation][e2e] make kubevirt-plugin gating job mandatory\n1917315 - localvolumeset-local-provisoner-xxx pods are not killed after upgrading from 4.6 to 4.7\n1917327 - annotations.message maybe wrong for NTOPodsNotReady alert\n1917367 - Refactor periodic.go\n1917371 - Add docs on how to use the built-in profiler\n1917372 - Application metrics are shown on Metrics dashboard but not in linked Prometheus UI in OCP management console\n1917395 - pv-pool backing store name restriction should be at 43 characters from the ocs ui\n1917484 - [BM][IPI] Failed to scale down machineset\n1917522 - Deprecate --filter-by-os in oc adm catalog mirror\n1917537 - controllers continuously busy reconciling operator\n1917551 - use min_over_time for vsphere prometheus alerts\n1917585 - OLM Operator install page missing i18n\n1917587 - Manila CSI operator becomes degraded if user doesn\u0027t have permissions to list share types\n1917605 - Deleting an exgw causes pods to no longer route to other exgws\n1917614 - [aws c2s] ingress operator uses unavailable resourcegrouptaggings API\n1917656 - Add to Project/application for eventSources from topology shows 404\n1917658 - Show TP badge for sources powered by camel connectors in create flow\n1917660 - Editing parallelism of job get error info\n1917678 - Could not provision pv when no symlink and target found on rhel worker\n1917679 - Hide double CTA in admin pipelineruns tab\n1917683 - `NodeTextFileCollectorScrapeError` alert in OCP 4.6 cluster. 
\n1917759 - Console operator panics after setting plugin that does not exists to the console-operator config\n1917765 - ansible-operator version in downstream image should be v1.3.0 not v4.7.0\n1917770 - helm-operator version in downstream image should be v1.3.0 not v4.7.0\n1917799 - Gather s list of names and versions of installed OLM operators\n1917803 - [sig-storage] Pod Disks should be able to delete a non-existent PD without error\n1917814 - Show Broker create option in eventing under admin perspective\n1917838 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1917872 - [oVirt] rebase on latest SDK 2021-01-12\n1917911 - network-tools needs ovnkube-trace binary from ovn-kubernetes image\n1917938 - upgrade version of dnsmasq package\n1917942 - Canary controller causes panic in ingress-operator\n1918019 - Undesired scrollbars in markdown area of QuickStart\n1918068 - Flaky olm integration tests\n1918085 - reversed name of job and namespace in cvo log\n1918112 - Flavor is not editable if a customize VM is created from cli\n1918129 - Update IO sample archive with missing resources \u0026 remove IP anonymization from clusteroperator resources\n1918132 - i18n: Volume Snapshot Contents menu is not translated\n1918133 - [e2e][automation] Fix ocp 4.7 existing tests - part2\n1918140 - Deployment openstack-cinder-csi-driver-controller and openstack-manila-csi-controllerplugin doesn\u0027t be installed on OSP\n1918153 - When `\u0026` character is set as an environment variable in a build config it is getting converted as `\\u0026`\n1918185 - Capitalization on PLR details page\n1918287 - [ovirt] ovirt csi driver is flooding RHV with API calls and spam the event UI with new connections\n1918318 - Kamelet connector\u0027s are not shown in eventing section under Admin perspective\n1918351 - Gather SAP configuration (SCC \u0026 ClusterRoleBinding)\n1918375 - [calico] rbac-proxy container in kube-proxy fails to create 
tokenreviews\n1918395 - [ovirt] increase livenessProbe period\n1918415 - MCD nil pointer on dropins\n1918438 - [ja_JP, zh_CN] Serverless i18n misses\n1918440 - Kernel Arguments get reapplied even when no new kargs has been added in MachineConfig\n1918471 - CustomNoUpgrade Feature gates are not working correctly\n1918558 - Supermicro nodes boot to PXE upon reboot after successful deployment to disk\n1918622 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1918623 - Updating ose-jenkins-agent-nodejs-12 builder \u0026 base images to be consistent with ART\n1918625 - Updating ose-jenkins-agent-nodejs-10 builder \u0026 base images to be consistent with ART\n1918635 - Updating openshift-jenkins-2 builder \u0026 base images to be consistent with ART #1197\n1918639 - Event listener with triggerRef crashes the console\n1918648 - Subscription page doesn\u0027t show InstallPlan correctly\n1918716 - Manilacsi becomes degraded even though it is not available with the underlying Openstack\n1918748 - helmchartrepo is not http(s)_proxy-aware\n1918757 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1918803 - Need dedicated details page w/ global config breadcrumbs for \u0027KnativeServing\u0027 plugin\n1918826 - Insights popover icons are not horizontally aligned\n1918879 - need better debug for bad pull secrets\n1918958 - The default NMstate instance from the operator is incorrect\n1919097 - Close bracket \")\" missing at the end of the sentence in the UI\n1919231 - quick search modal cut off on smaller screens\n1919259 - Make \"Add x\" singular in Pipeline Builder\n1919260 - VM Template list actions should not wrap\n1919271 - NM prepender script doesn\u0027t support systemd-resolved\n1919341 - Updating ose-jenkins-agent-maven builder \u0026 base images to be consistent with ART\n1919360 - Need managed-cluster-info metric enabled for OCP monitoring telemetry\n1919379 - dotnet logo out of date\n1919387 - Console 
login fails with no error when it can\u0027t write to localStorage\n1919396 - A11y Violation: svg-img-alt on Pod Status ring\n1919407 - OpenStack IPI has three-node control plane limitation, but InstallConfigs aren\u0027t verified\n1919750 - Search InstallPlans got Minified React error\n1919778 - Upgrade is stuck in insights operator Degraded with \"Source clusterconfig could not be retrieved\" until insights operator pod is manually deleted\n1919823 - OCP 4.7 Internationalization Chinese tranlate issue\n1919851 - Visualization does not render when Pipeline \u0026 Task share same name\n1919862 - The tip information for `oc new-project  --skip-config-write` is wrong\n1919876 - VM created via customize wizard cannot inherit template\u0027s PVC attributes\n1919877 - Click on KSVC breaks with white screen\n1919879 - The toolbox container name is changed from \u0027toolbox-root\u0027  to \u0027toolbox-\u0027 in a chroot environment\n1919945 - user entered name value overridden by default value when selecting a git repository\n1919968 - [release-4.7] Undiagnosed panic detected in pod runtime.go:76: invalid memory address or nil pointer dereference\n1919970 - NTO does not update when the tuned profile is updated. 
\n1919999 - Bump Cluster Resource Operator Golang Versions\n1920027 - machine-config-operator consistently failing during 4.6 to 4.7 upgrades and clusters do not install successfully with proxy configuration\n1920200 - user-settings network error results in infinite loop of requests\n1920205 - operator-registry e2e tests not working properly\n1920214 - Bump golang to 1.15 in cluster-resource-override-admission\n1920248 - re-running the pipelinerun with pipelinespec crashes the UI\n1920320 - VM template field is \"Not available\" if it\u0027s created from common template\n1920367 - When creating localvolumeset instance from the web console, the title for setting volumeMode is `Disk Mode`\n1920368 - Fix containers creation issue resulting in runc running on Guaranteed Pod CPUs\n1920390 - Monitoring \u003e Metrics graph shifts to the left when clicking the \"Stacked\" option and when toggling data series lines on / off\n1920426 - Egress Router CNI OWNERS file should have ovn-k team members\n1920427 - Need to update `oc login` help page since we don\u0027t support prompt interactively for the username\n1920430 - [V2V] [UI] Browser window becomes empty when running import wizard for the first time\n1920438 - openshift-tuned panics on turning debugging on/off. 
\n1920445 - e2e-gcp-ovn-upgrade job is actually using openshift-sdn\n1920481 - kuryr-cni pods using unreasonable amount of CPU\n1920509 - wait for port 6443 to be open in the kube-scheduler container; use ss instead of lsof\n1920524 - Topology graph crashes adding Open Data Hub operator\n1920526 - catalog operator causing CPU spikes and bad etcd performance\n1920551 - Boot Order is not editable for Templates in \"openshift\" namespace\n1920555 - bump cluster-resource-override-admission api dependencies\n1920571 - fcp multipath will not recover failed paths automatically\n1920619 - Remove default scheduler profile value\n1920655 - Console should not show the Create Autoscaler link in cluster settings when the CRD is not present\n1920674 - MissingKey errors in bindings namespace\n1920684 - Text in language preferences modal is misleading\n1920695 - CI is broken because of bad image registry reference in the Makefile\n1920756 - update generic-admission-server library to get the system:masters authorization optimization\n1920769 - [Upgrade] OCP upgrade from 4.6.13 to 4.7.0-fc.4 for \"network-check-target\" failed when \"defaultNodeSelector\" is set\n1920771 - i18n: Delete persistent volume claim drop down is not translated\n1920806 - [OVN]Nodes lost network connection after reboot on the vSphere UPI\n1920912 - Unable to power off BMH from console\n1920981 - When OCS was deployed with arbiter mode enable add capacity is increasing the count by \"2\"\n1920984 - [e2e][automation] some menu items names are out dated\n1921013 - Gather PersistentVolume definition (if any) used in image registry config\n1921023 - Do not enable Flexible Scaling to true for Internal mode clusters(revert to 4.6 behavior)\n1921087 - \u0027start next quick start\u0027 link doesn\u0027t work and is unintuitive\n1921088 - test-cmd is failing on volumes.sh pretty consistently\n1921248 - Clarify the kubelet configuration cr description\n1921253 - Text filter default placeholder text not 
internationalized\n1921258 - User Preferences: Active perspective and project change in the current window when selected in a different window\n1921275 - Panic in authentication-operator in (*deploymentController).updateOperatorDeploymentInfo\n1921277 - Fix Warning and Info log statements to handle arguments\n1921281 - oc get -o yaml --export returns \"error: unknown flag: --export\"\n1921458 - [SDK] Gracefully handle the `run bundle-upgrade` if the lower version operator doesn\u0027t exist\n1921556 - [OCS with Vault]: OCS pods didn\u0027t comeup after deploying with Vault details from UI\n1921572 - For external source (i.e GitHub Source) form view as well shows yaml\n1921580 - [e2e][automation]Test VM detail view actions dropdown does not pass\n1921610 - Pipeline metrics font size inconsistency\n1921644 - [e2e][automation] tests errors with wrong cloudInit new line syntax\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1921655 - [OSP] Incorrect error handling during cloudinfo generation\n1921713 - [e2e][automation]  fix failing VM migration tests\n1921762 - Serving and Eventing breadcrumbs should direct users back to tabbed page view\n1921774 - delete application modal errors when a resource cannot be found\n1921806 - Explore page APIResourceLinks aren\u0027t i18ned\n1921823 - CheckBoxControls not internationalized\n1921836 - AccessTableRows don\u0027t internationalize \"User\" or \"Group\"\n1921857 - Test flake when hitting router in e2e tests due to one router not being up to date\n1921880 - Dynamic plugins are not initialized on console load in production mode\n1921911 - Installer PR #4589 is causing leak of IAM role policy bindings\n1921921 - \"Global Configuration\" breadcrumb does not use sentence case\n1921949 - Console bug - source code URL broken for gitlab self-hosted repositories\n1921954 - Subscription-related constraints in ResolutionFailed events are misleading\n1922015 - buttons in modal header 
are invisible on Safari\n1922021 - Nodes terminal page \u0027Expand\u0027 \u0027Collapse\u0027 button not translated\n1922050 - [e2e][automation] Improve vm clone tests\n1922066 - Cannot create VM from custom template which has extra disk\n1922098 - Namespace selection dialog is not closed after select a namespace\n1922099 - Updated Readme documentation for QE code review and setup\n1922146 - Egress Router CNI doesn\u0027t have logging support. \n1922267 - Collect specific ADFS error\n1922292 - Bump RHCOS boot images for 4.7\n1922454 - CRI-O doesn\u0027t enable pprof by default\n1922473 - reconcile LSO images for 4.8\n1922573 - oc returns an error while using -o jsonpath when there is no resource found in the namespace\n1922782 - Source registry missing docker:// in yaml\n1922907 - Interop UI Tests - step implementation for updating feature files\n1922911 - Page crash when click the \"Stacked\" checkbox after clicking the data series toggle buttons\n1922991 - \"verify /run filesystem contents do not have unexpected content using a simple Docker Strategy Build\" test fails on OKD\n1923003 - WebConsole Insights widget showing \"Issues pending\" when the cluster doesn\u0027t report anything\n1923098 - [vsphere-problem-detector-operator] Need permission to access replicasets.apps resources\n1923102 - [vsphere-problem-detector-operator] pod\u0027s version is not correct\n1923245 - [Assisted-4.7] [Staging][Minimal-ISO] nodes fails to boot\n1923674 - k8s 1.20 vendor dependencies\n1923721 - PipelineRun running status icon is not rotating\n1923753 - Increase initialDelaySeconds for ovs-daemons container in the ovs-node daemonset for upgrade scenarios\n1923774 - Docker builds failing for openshift/cluster-resource-override-admission-operator\n1923802 - ci/prow/e2e-aws-olm build failing for openshift/cluster-resource-override-admission-operator\n1923874 - Unable to specify values with % in kubeletconfig\n1923888 - Fixes error metadata gathering\n1923892 - Update arch.md after 
refactor. \n1923894 - \"installed\" operator status in operatorhub page does not reflect the real status of operator\n1923895 - Changelog generation. \n1923911 - [e2e][automation] Improve tests for vm details page and list filter\n1923945 - PVC Name and Namespace resets when user changes os/flavor/workload\n1923951 - EventSources shows `undefined` in project\n1923973 - Dynamic plugin demo README does not contain info how to enable the ConsolePlugins\n1924046 - Localhost: Refreshing on a Project removes it from nav item urls\n1924078 - Topology quick search View all results footer should be sticky. \n1924081 - NTO should ship the latest Tuned daemon release 2.15\n1924084 - backend tests incorrectly hard-code artifacts dir\n1924128 - [sig-builds][Feature:Builds] verify /run filesystem contents  do not have unexpected content using a simple Docker Strategy Build\n1924135 - Under sufficient load, CRI-O may segfault\n1924143 - Code Editor Decorator url is broken for Bitbucket repos\n1924188 - Language selector dropdown doesn\u0027t always pre-select the language\n1924365 - Add extra disk for VM which use boot source PXE\n1924383 - Degraded network operator during upgrade to 4.7.z\n1924387 - [ja_JP][zh_CN] Incorrect warning message for deleting namespace on Delete Pod dialog box. 
\n1924480 - non cluster admin can not take VM snapshot: An error occurred, cannot set blockOwnerDeletion if an ownerReference refers to a resource you can\u0027t set finalizers on\n1924583 - Deprectaed templates are listed in the Templates screen\n1924870 - pick upstream pr#96901: plumb context with request deadline\n1924955 - Images from Private external registry not working in deploy Image\n1924961 - k8sutil.TrimDNS1123Label creates invalid values\n1924985 - Build egress-router-cni for both RHEL 7 and 8\n1925020 - Console demo plugin deployment image shoult not point to dockerhub\n1925024 - Remove extra validations on kafka source form view net section\n1925039 - [e2e] Fix Test - ID(CNV-5327) Change Custom Flavor while VM is running\n1925072 - NTO needs to ship the current latest stalld v1.7.0\n1925163 - Missing info about dev catalog in boot source template column\n1925200 - Monitoring Alert icon is missing on the workload in Topology view\n1925262 - apiserver getting 2 SIGTERM signals which was immediately making it exit code 1\n1925319 - bash syntax error in configure-ovs.sh script\n1925408 - Remove StatefulSet gatherer and replace it with gathering corresponding config map data\n1925516 - Pipeline Metrics Tooltips are overlapping data\n1925562 - Add new ArgoCD link from GitOps application environments page\n1925596 - Gitops details page image and commit id text overflows past card boundary\n1926556 - \u0027excessive etcd leader changes\u0027 test case failing in serial job because prometheus data is wiped by machine set test\n1926588 - The tarball of operator-sdk is not ready for ocp4.7\n1927456 - 4.7 still points to 4.6 catalog images\n1927500 - API server exits non-zero on 2 SIGTERM signals\n1929278 - Monitoring workloads using too high a priorityclass\n1929645 - Remove openshift:kubevirt-machine-controllers decleration from machine-api\n1929920 - Cluster monitoring documentation link is broken - 404 not found\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-10103\nhttps://access.redhat.com/security/cve/CVE-2018-10105\nhttps://access.redhat.com/security/cve/CVE-2018-14461\nhttps://access.redhat.com/security/cve/CVE-2018-14462\nhttps://access.redhat.com/security/cve/CVE-2018-14463\nhttps://access.redhat.com/security/cve/CVE-2018-14464\nhttps://access.redhat.com/security/cve/CVE-2018-14465\nhttps://access.redhat.com/security/cve/CVE-2018-14466\nhttps://access.redhat.com/security/cve/CVE-2018-14467\nhttps://access.redhat.com/security/cve/CVE-2018-14468\nhttps://access.redhat.com/security/cve/CVE-2018-14469\nhttps://access.redhat.com/security/cve/CVE-2018-14470\nhttps://access.redhat.com/security/cve/CVE-2018-14553\nhttps://access.redhat.com/security/cve/CVE-2018-14879\nhttps://access.redhat.com/security/cve/CVE-2018-14880\nhttps://access.redhat.com/security/cve/CVE-2018-14881\nhttps://access.redhat.com/security/cve/CVE-2018-14882\nhttps://access.redhat.com/security/cve/CVE-2018-16227\nhttps://access.redhat.com/security/cve/CVE-2018-16228\nhttps://access.redhat.com/security/cve/CVE-2018-16229\nhttps://access.redhat.com/security/cve/CVE-2018-16230\nhttps://access.redhat.com/security/cve/CVE-2018-16300\nhttps://access.redhat.com/security/cve/CVE-2018-16451\nhttps://access.redhat.com/security/cve/CVE-2018-16452\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2019-3884\nhttps://access.redhat.com/security/cve/CVE-2019-5018\nhttps://access.redhat.com/security/cve/CVE-2019-6977\nhttps://access.redhat.com/security/cve/CVE-2019-6978\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.r
edhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9455\nhttps://access.redhat.com/security/cve/CVE-2019-9458\nhttps://access.redhat.com/security/cve/CVE-2019-11068\nhttps://access.redhat.com/security/cve/CVE-2019-12614\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13225\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15165\nhttps://access.redhat.com/security/cve/CVE-2019-15166\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-15917\nhttps://access.redhat.com/security/cve/CVE-2019-15925\nhttps://access.redhat.com/security/cve/CVE-2019-16167\nhttps://access.redhat.com/security/cve/CVE-2019-16168\nhttps://access.redhat.com/security/cve/CVE-2019-16231\nhttps://access.redhat.com/security/cve/CVE-2019-16233\nhttps://access.redhat.com/security/cve/CVE-2019-16935\nhttps://access.redhat.com/security/cve/CVE-2019-17450\nhttps://access.redhat.com/security/cve/CVE-2019-17546\nhttps://access.redhat.com/security/cve/CVE-2019-18197\
nhttps://access.redhat.com/security/cve/CVE-2019-18808\nhttps://access.redhat.com/security/cve/CVE-2019-18809\nhttps://access.redhat.com/security/cve/CVE-2019-19046\nhttps://access.redhat.com/security/cve/CVE-2019-19056\nhttps://access.redhat.com/security/cve/CVE-2019-19062\nhttps://access.redhat.com/security/cve/CVE-2019-19063\nhttps://access.redhat.com/security/cve/CVE-2019-19068\nhttps://access.redhat.com/security/cve/CVE-2019-19072\nhttps://access.redhat.com/security/cve/CVE-2019-19221\nhttps://access.redhat.com/security/cve/CVE-2019-19319\nhttps://access.redhat.com/security/cve/CVE-2019-19332\nhttps://access.redhat.com/security/cve/CVE-2019-19447\nhttps://access.redhat.com/security/cve/CVE-2019-19524\nhttps://access.redhat.com/security/cve/CVE-2019-19533\nhttps://access.redhat.com/security/cve/CVE-2019-19537\nhttps://access.redhat.com/security/cve/CVE-2019-19543\nhttps://access.redhat.com/security/cve/CVE-2019-19602\nhttps://access.redhat.com/security/cve/CVE-2019-19767\nhttps://access.redhat.com/security/cve/CVE-2019-19770\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-19956\nhttps://access.redhat.com/security/cve/CVE-2019-20054\nhttps://access.redhat.com/security/cve/CVE-2019-20218\nhttps://access.redhat.com/security/cve/CVE-2019-20386\nhttps://access.redhat.com/security/cve/CVE-2019-20387\nhttps://access.redhat.com/security/cve/CVE-2019-20388\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20636\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/security/cve/CVE-2019-20812\nhttps://access.redhat.com/security/cve/CVE-2019-20907\nhttps://access.redhat.com/security/cve/CVE-2019-20916\nhttps://access.redhat.com/security/cve/CVE-2020-0305\nhttps://access.redhat.com/security/cve/CVE-2020-0444\nhttps://access.redhat.com/security/cve/CVE-2020-1716\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.c
om/security/cve/CVE-2020-1751\nhttps://access.redhat.com/security/cve/CVE-2020-1752\nhttps://access.redhat.com/security/cve/CVE-2020-1971\nhttps://access.redhat.com/security/cve/CVE-2020-2574\nhttps://access.redhat.com/security/cve/CVE-2020-2752\nhttps://access.redhat.com/security/cve/CVE-2020-2922\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3898\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-6405\nhttps://access.redhat.com/security/cve/CVE-2020-7595\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-8177\nhttps://access.redhat.com/security/cve/CVE-2020-8492\nhttps://access.redhat.com/security/cve/CVE-2020-8563\nhttps://access.redhat.com/security/cve/CVE-2020-8566\nhttps://access.redhat.com/security/cve/CVE-2020-8619\nhttps://access.redhat.com/security/cve/CVE-2020-8622\nhttps://access.redhat.com/security/cve/CVE-2020-8623\nhttps://access.redhat.com/security/cve/CVE-2020-8624\nhttps://access.redhat.com/security/cve/CVE-2020-8647\nhttps://access.redhat.com/security/cve/CVE-2020-8648\nhttps://access.redhat.com/security/cve/CVE-2020-8649\nhttps://access.redhat.com/security/cve/CVE-2020-9327\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com
/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-10029\nhttps://access.redhat.com/security/cve/CVE-2020-10732\nhttps://access.redhat.com/security/cve/CVE-2020-10749\nhttps://access.redhat.com/security/cve/CVE-2020-10751\nhttps://access.redhat.com/security/cve/CVE-2020-10763\nhttps://access.redhat.com/security/cve/CVE-2020-10773\nhttps://access.redhat.com/security/cve/CVE-2020-10774\nhttps://access.redhat.com/security/cve/CVE-2020-10942\nhttps://access.redhat.com/security/cve/CVE-2020-11565\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-12465\nhttps://access.redhat.com/security/cve/CVE-2020-12655\nhttps://access.redhat.com/security/cve/CVE-2020-12659\nhttps://access.redhat.com/security/cve/CVE-2020-12770\nhttps://access.redhat.com/security/cve/CVE-2020-12826\nhttps://access.redhat.com/security/cve/CVE-2020-13249\nhttps://access.redhat.com/security/cve/CVE-2020-13630\nhttps://access.redhat.com/security/cve/CVE-2020-13631\nhttps://access.redhat.com/security/cve/CVE-2020-13632\nhttps://access.redhat.com/security/cve/CVE-2020-14019\nhttps://access.redhat.com/security/cve/CVE-2020-14040\nhttps://access.redhat.com/security/cve/CVE-2020-14381\nhttps://access.redhat.com/security/cve/CVE-2020-14382\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nh
ttps://access.redhat.com/security/cve/CVE-2020-14422\nhttps://access.redhat.com/security/cve/CVE-2020-15157\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-15862\nhttps://access.redhat.com/security/cve/CVE-2020-15999\nhttps://access.redhat.com/security/cve/CVE-2020-16166\nhttps://access.redhat.com/security/cve/CVE-2020-24490\nhttps://access.redhat.com/security/cve/CVE-2020-24659\nhttps://access.redhat.com/security/cve/CVE-2020-25211\nhttps://access.redhat.com/security/cve/CVE-2020-25641\nhttps://access.redhat.com/security/cve/CVE-2020-25658\nhttps://access.redhat.com/security/cve/CVE-2020-25661\nhttps://access.redhat.com/security/cve/CVE-2020-25662\nhttps://access.redhat.com/security/cve/CVE-2020-25681\nhttps://access.redhat.com/security/cve/CVE-2020-25682\nhttps://access.redhat.com/security/cve/CVE-2020-25683\nhttps://access.redhat.com/security/cve/CVE-2020-25684\nhttps://access.redhat.com/security/cve/CVE-2020-25685\nhttps://access.redhat.com/security/cve/CVE-2020-25686\nhttps://access.redhat.com/security/cve/CVE-2020-25687\nhttps://access.redhat.com/security/cve/CVE-2020-25694\nhttps://access.redhat.com/security/cve/CVE-2020-25696\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-27813\nhttps://access.redhat.com/security/cve/CVE-2020-27846\nhttps://access.redhat.com/security/cve/CVE-2020-28362\nhttps://access.redhat.com/security/cve/CVE-2020-29652\nhttps://access.redhat.com/security/cve/CVE-2021-2007\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n\nThis advisory provides the following updates among others:\n\n* Enhances profile parsing time. \n* Fixes excessive resource consumption from the Operator. \n* Fixes default content image. \n* Fixes outdated remediation handling. Bugs fixed (https://bugzilla.redhat.com/):\n\n1897635 - CVE-2020-28362 golang: math/big: panic during recursive division of very large numbers\n1918990 - ComplianceSuite scans use quay content image for initContainer\n1919135 - [OCP v46] The autoApplyRemediation pauses the machineConfigPool if there is outdated complianceRemediation object present\n1919846 - After remediation applied, the compliancecheckresults still reports Failed status for some rules\n1920999 - Compliance operator is not displayed when disconnected mode is selected in the OpenShift Web-Console. 
\n\nBug Fix(es):\n\n* Aggregator pod tries to parse ConfigMaps without results (BZ#1899479)\n\n* The compliancesuite object returns error with ocp4-cis tailored profile\n(BZ#1902251)\n\n* The compliancesuite does not trigger when there are multiple rhcos4\nprofiles added in scansettingbinding object (BZ#1902634)\n\n* [OCP v46] Not all remediations get applied through machineConfig although\nthe status of all rules shows Applied in ComplianceRemediations object\n(BZ#1907414)\n\n* The profile parser pod deployment and associated profiles should get\nremoved after upgrade the compliance operator (BZ#1908991)\n\n* Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error\n\"something else exists at that path\" (BZ#1909081)\n\n* [OCP v46] Always update the default profilebundles on Compliance operator\nstartup (BZ#1909122)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1899479 - Aggregator pod tries to parse ConfigMaps without results\n1902111 - CVE-2020-27813 golang-github-gorilla-websocket: integer overflow leads to denial of service\n1902251 - The compliancesuite object returns error with ocp4-cis tailored profile\n1902634 - The compliancesuite does not trigger when there are multiple rhcos4 profiles added in scansettingbinding object\n1907414 - [OCP v46] Not all remediations get applied through machineConfig although the status of all rules shows Applied in ComplianceRemediations object\n1908991 - The profile parser pod deployment and associated profiles should get removed after upgrade the compliance operator\n1909081 - Applying the \"rhcos4-moderate\" compliance profile leads to Ignition error \"something else exists at that path\"\n1909122 - [OCP v46] Always update the default profilebundles on Compliance operator startup\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for  additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
\n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t resilient to WAL replays. 
\n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String updates\n1961509 - DHCP daemon 
pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established\n1976301 - [ci] 
e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n2007379 - Events are not generated for master offset  for ordinary clock\n2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace\n2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address\n2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error\n2007522 - No new local-storage-operator-metadata-container is build for 4.10\n2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10\n2007580 - Azure cilium installs are failing e2e tests\n2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10\n2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes\n2007692 - 4.9 \"old-rhcos\" jobs are permafailing with storage test failures\n2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow\n2007757 - must-gather extracts imagestreams in the \"openshift\" namespace, but not Templates\n2007802 - AWS machine actuator get stuck if machine is completely missing\n2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator\n2008119 - The serviceAccountIssuer field on Authentication CR is reseted to \u201c\u201d when installation process\n2008151 - Topology breaks on clicking in empty state\n2008185 - Console operator go.mod should use go 1.16.version\n2008201 - openstack-az job is failing on haproxy idle test\n2008207 - vsphere CSI driver doesn\u0027t set resource limits\n2008223 - gather_audit_logs: fix oc command line to get the current audit profile\n2008235 - The Save button in the Edit DC form remains disabled\n2008256 - Update Internationalization README with scope 
info\n2008321 - Add correct documentation link for MON_DISK_LOW\n2008462 - Disable PodSecurity feature gate for 4.10\n2008490 - Backing store details page does not contain all the kebab actions. \n2010181 - Environment variables not getting reset on reload on deployment edit form\n2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2010341 - OpenShift Alerting Rules Style-Guide Compliance\n2010342 - Local console builds can have out of memory errors\n2010345 - OpenShift Alerting Rules Style-Guide Compliance\n2010348 - Reverts PIE build mode for K8S components\n2010352 - OpenShift Alerting Rules Style-Guide Compliance\n2010354 - OpenShift Alerting Rules Style-Guide Compliance\n2010359 - OpenShift Alerting Rules Style-Guide Compliance\n2010368 - OpenShift Alerting Rules Style-Guide Compliance\n2010376 - OpenShift Alerting Rules Style-Guide Compliance\n2010662 - Cluster is unhealthy after image-registry-operator tests\n2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)\n2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API\n2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address\n2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing\n2010864 - Failure building EFS operator\n2010910 - ptp worker events unable to identify interface for multiple interfaces\n2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24\n2010921 - Azure Stack Hub does not handle additionalTrustBundle\n2010931 - SRO CSV uses non default category \"Drivers and plugins\"\n2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well. 
\n2011038 - optional operator conditions are confusing\n2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass\n2011171 - diskmaker-manager constantly redeployed by LSO when creating LV\u0027s\n2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image\n2011368 - Tooltip in pipeline visualization shows misleading data\n2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels\n2011411 - Managed Service\u0027s Cluster overview page contains link to missing Storage dashboards\n2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster\n2011513 - Kubelet rejects pods that use resources that should be freed by completed pods\n2011668 - Machine stuck in deleting phase in VMware \"reconciler failed to Delete machine\"\n2011693 - (release-4.10) \"insightsclient_request_recvreport_total\" metric is always incremented\n2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn\u0027t export namespace labels anymore\n2011733 - Repository README points to broken documentarion link\n2011753 - Ironic resumes clean before raid configuration job is actually completed\n2011809 - The nodes page in the openshift console doesn\u0027t work. 
You just get a blank page\n2011822 - Obfuscation doesn\u0027t work at clusters with OVN\n2011882 - SRO helm charts not synced with templates\n2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot\n2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages\n2011903 - vsphere-problem-detector: session leak\n2011927 - OLM should allow users to specify a proxy for GRPC connections\n2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods\n2011960 - [tracker] Storage operator is not available after reboot cluster instances\n2011971 - ICNI2 pods are stuck in ContainerCreating state\n2011972 - Ingress operator not creating wildcard route for hypershift  clusters\n2011977 - SRO bundle references non-existent image\n2012069 - Refactoring Status controller\n2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI\n2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group\n2012233 - [IBMCLOUD] IPI: \"Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)\"\n2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig\n2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off\n2012407 - [e2e][automation] improve vm tab console tests\n2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don\u0027t have namespace label\n2012562 - migration condition is not detected in list view\n2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written\n2012780 - The port 50936 used by haproxy is occupied by kube-apiserver\n2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working\n2012902 - Neutron Ports assigned to Completed Pods are not reused Edit\n2012915 - kube_persistentvolumeclaim_labels and 
kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack\n2012971 - Disable operands deletes\n2013034 - Cannot install to openshift-nmstate namespace\n2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)\n2013199 - post reboot of node SRIOV policy taking huge time\n2013203 - UI breaks when trying to create block pool before storage cluster/system creation\n2013222 - Full breakage for nightly payload promotion\n2013273 - Nil pointer exception when phc2sys options are missing\n2013321 - TuneD: high CPU utilization of the TuneD daemon. \n2013416 - Multiple assets emit different content to the same filename\n2013431 - Application selector dropdown has incorrect font-size and positioning\n2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8\n2013545 - Service binding created outside topology is not visible\n2013599 - Scorecard support storage is not included in ocp4.9\n2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)\n2013646 - fsync controller will show false positive if gaps in metrics are observed. to user and tries to just load a blank screen on \u0027Add Capacity\u0027 button click\n2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu\n2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain. 
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs :  Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization  : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All,  data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - noarch, ppc64, s390x\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - noarch\n\n3. Description:\n\nWebKitGTK+ is port of the WebKit portable web rendering engine to the GTK+\nplatform. These packages provide WebKitGTK+ for GTK+ 3. \n\nThe following packages have been upgraded to a later upstream version:\nwebkitgtk4 (2.28.2). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 7.9 Release Notes linked from the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nx86_64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nppc64:\nwebkitgtk4-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64.rpm\n\nppc64le:\nwebkitgtk4-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.ppc64le.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64le.rpm\n\ns390x:\nwebkitgtk4-2.28.2-2.el7.s390.rpm\nwebkitgtk4-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.s390x.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nnoarch:\nwebkitgtk4-doc-2.28.2-2.el7.noarch.rpm\n\nppc64:\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-devel-2.28.2-2.el7.ppc64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.ppc64.rpm\n\ns390x:\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-devel-2.28.2-2.el7.s390x.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.s390x.rpm\n\nRed Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nwebkitgtk4-2.28.2-2.el7.src.rpm\n\nx86_64:\nwebkitgtk4-2.28.2-2.el7.i686.rpm\nwebkitgtk4-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.i686.rpm\nwebkitgtk4-debuginfo-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-devel-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-2.28.2-2.el7.x86_64.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.i686.rpm\nwebkitgtk4-jsc-devel-2.28.2-2.el7.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1207179 - Select items matching non existing pattern does not unselect already selected\n1566027 - can\u0027t correctly compute contents size if hidden files are included\n1569868 - Browsing samba shares using gvfs is very slow\n1652178 - [RFE] perf-tool run on wayland\n1656262 - The terminal\u0027s character display is unclear on rhel8 guest after installing gnome\n1668895 - [RHEL8] Timedlogin Fails when Userlist is Disabled\n1692536 - login screen shows after gnome-initial-setup\n1706008 - Sound Effect sometimes fails to change to selected option. \n1706076 - Automatic suspend for 90 minutes is set for 80 minutes instead. \n1715845 - JS ERROR: TypeError: this._workspacesViews[i] is undefined\n1719937 - GNOME Extension: Auto-Move-Windows Not Working Properly\n1758891 - tracker-devel subpackage missing from el8 repos\n1775345 - Rebase xdg-desktop-portal to 1.6\n1778579 - Nautilus does not respect umask settings. 
\n1779691 - Rebase xdg-desktop-portal-gtk to 1.6\n1794045 - There are two different high contrast versions of desktop icons\n1804719 - Update vte291 to 0.52.4\n1805929 - RHEL 8.1 gnome-shell-extension errors\n1811721 - CVE-2020-10018 webkitgtk: Use-after-free issue in accessibility/AXObjectCache.cpp\n1814820 - No checkbox to install updates in the shutdown dialog\n1816070 - \"search for an application to open this file\" dialog broken\n1816678 - CVE-2019-8846 webkitgtk: Use after free issue may lead to remote code execution\n1816684 - CVE-2019-8835 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1816686 - CVE-2019-8844 webkitgtk: Processing maliciously crafted web content may lead to arbitrary code execution\n1817143 - Rebase WebKitGTK to 2.28\n1820759 - Include IO stall fixes\n1820760 - Include IO fixes\n1824362 - [BZ] Setting in gnome-tweak-tool Window List will reset upon opening\n1827030 - gnome-settings-daemon: subscription notification on CentOS Stream\n1829369 - CVE-2020-11793 webkitgtk: use-after-free via crafted web content\n1832347 - [Rebase] Rebase pipewire to 0.3.x\n1833158 - gdm-related dconf folders and keyfiles are not found in fresh 8.2 install\n1837381 - Backport screen cast improvements to 8.3\n1837406 - Rebase gnome-remote-desktop to PipeWire 0.3 version\n1837413 - Backport changes needed by xdg-desktop-portal-gtk-1.6\n1837648 - Vendor.conf should point to https://access.redhat.com/site/solutions/537113\n1840080 - Can not control top bar menus via keys in Wayland\n1840788 - [flatpak][rhel8] unable to build potrace as dependency\n1843486 - Software crash after clicking Updates tab\n1844578 - anaconda very rarely crashes at startup with a pygobject traceback\n1846191 - usb adapters hotplug crashes gnome-shell\n1847051 - JS ERROR: TypeError: area is null\n1847061 - File search doesn\u0027t work under certain locales\n1847062 - gnome-remote-desktop crash on QXL graphics\n1847203 - gnome-shell: 
get_top_visible_window_actor(): gnome-shell killed by SIGSEGV\n1853477 - CVE-2020-15503 LibRaw: lack of thumbnail size range check can lead to buffer overflow\n1854734 - PipeWire 0.2 should be required by xdg-desktop-portal\n1866332 - Remove obsolete libusb-devel dependency\n1868260 - [Hyper-V][RHEL8] VM starts GUI failed on Hyper-V 2019/2016, hangs at \"Started GNOME Display Manager\" - GDM regression issue",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-3894"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      }
    ],
    "trust": 2.52
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-3894",
        "trust": 3.4
      },
      {
        "db": "PACKETSTORM",
        "id": "157378",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96545608",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.4513",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1627",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0864",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0584",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0099",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3399",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0234",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3893",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0691",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "158068",
        "trust": 0.6
      },
      {
        "db": "NSFOCUS",
        "id": "49320",
        "trust": 0.6
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2020/04/27/3",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-182019",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3894",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168011",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "160624",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161546",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161429",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161016",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159375",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "159816",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "id": "VAR-202004-1972",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-182019"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:23:38.165000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT211107",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211107"
      },
      {
        "title": "HT211101",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211101"
      },
      {
        "title": "HT211102",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211102"
      },
      {
        "title": "HT211104",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211104"
      },
      {
        "title": "HT211105",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211105"
      },
      {
        "title": "HT211106",
        "trust": 0.8,
        "url": "https://support.apple.com/en-us/HT211106"
      },
      {
        "title": "HT211101",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211101"
      },
      {
        "title": "HT211102",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211102"
      },
      {
        "title": "HT211104",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211104"
      },
      {
        "title": "HT211105",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211105"
      },
      {
        "title": "HT211106",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211106"
      },
      {
        "title": "HT211107",
        "trust": 0.8,
        "url": "https://support.apple.com/ja-jp/HT211107"
      },
      {
        "title": "Multiple Apple product WebKit Repair measures for component race condition problem vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=112974"
      },
      {
        "title": "The Register",
        "trust": 0.2,
        "url": "https://www.theregister.co.uk/2020/03/25/apple_patch_update/"
      },
      {
        "title": "Debian Security Advisories: DSA-4681-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=f4d1e9f2c79b1bc667cc4ee30e67d845"
      },
      {
        "title": "Red Hat: Moderate: GNOME security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204451 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Quay v3.3.3 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210050 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210190 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: webkitgtk4 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20204035 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20210436 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Service Telemetry Framework 1.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225924 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20205605 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1563",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1563"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-362",
        "trust": 1.9
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211101"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211102"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211104"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211105"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211106"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/ht211107"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3894"
      },
      {
        "trust": 1.3,
        "url": "https://packetstormsecurity.com/files/157378/webkit-audioarray-allocate-data-race-out-of-bounds-access.html"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.8,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.8,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2020-3894"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96545608/"
      },
      {
        "trust": 0.7,
        "url": "https://www.debian.org/security/2020/dsa-4681"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.6,
        "url": "http://www.openwall.com/lists/oss-security/2020/04/27/3"
      },
      {
        "trust": 0.6,
        "url": "https://security.gentoo.org/glsa/202006-08"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158068/gentoo-linux-security-advisory-202006-08.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-wpe-webkit-eight-vulnerabilities-32113"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0864"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht211107"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0691"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.4513/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0099/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0234/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0584"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1627/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3399/"
      },
      {
        "trust": 0.6,
        "url": "http://www.nsfocus.net/vulndb/49320"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3893/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.5,
        "url": "https://www.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20907"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20388"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-15165"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14382"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11068"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19221"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-18197"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-7595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16168"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8177"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-9327"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-16935"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20916"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5018"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19956"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-14422"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-20387"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-1752"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-8492"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-6405"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-10029"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-11068"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13631"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2019-1551"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-1971"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-24659"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-15166"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14469"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-10105"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14461"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14466"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14882"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16227"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14464"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16230"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14468"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14467"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14880"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14881"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16300"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14462"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16229"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-10103"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14463"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-16451"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14879"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14019"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14470"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-14465"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-16452"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20386"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27813"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18197"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20907"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5018"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19956"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-1551"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20387"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20916"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/updating/updating-cluster"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19221"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-rel"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15165"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16935"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20388"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-16168"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20218"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/362.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2020-1563.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_openshift_container_s"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5605"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1885700]"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25684"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24490"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-2007"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19072"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9458"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13249"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18809"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20054"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12826"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25211"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19602"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25661"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25641"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29652"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15917"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6978"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0444"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16233"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-14553"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15999"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25682"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17546"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3884"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19062"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19046"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19447"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25685"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14381"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12770"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19767"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19533"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-2922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25687"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-16167"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9455"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-12614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25681"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19063"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19319"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8563"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:5634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0436"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:0190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8768"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8611"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8676"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8707"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8658"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.9_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8719"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8688"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8765"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8601"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8821"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8686"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8671"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8763"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8544"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8679"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8674"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8622"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8678"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8669"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8608"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8666"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8571"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8689"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8551"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8596"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8610"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-11070"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8644"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6237"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8607"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8563"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8536"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8680"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-6251"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8609"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8683"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8506"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8649"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8583"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8597"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.3_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11793"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2020:4451"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8816"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/site/solutions/537113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8820"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8819"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15503"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8846"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-04-01T00:00:00",
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "date": "2020-04-01T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "date": "2022-08-09T14:36:05",
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "date": "2020-12-18T19:14:41",
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "date": "2021-02-25T15:29:25",
        "db": "PACKETSTORM",
        "id": "161546"
      },
      {
        "date": "2021-02-16T15:44:48",
        "db": "PACKETSTORM",
        "id": "161429"
      },
      {
        "date": "2021-01-19T14:45:45",
        "db": "PACKETSTORM",
        "id": "161016"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2020-09-30T15:47:21",
        "db": "PACKETSTORM",
        "id": "159375"
      },
      {
        "date": "2020-11-04T15:24:00",
        "db": "PACKETSTORM",
        "id": "159816"
      },
      {
        "date": "2020-03-25T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      },
      {
        "date": "2020-04-22T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "date": "2020-04-01T18:15:16.270000",
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-06-02T00:00:00",
        "db": "VULHUB",
        "id": "VHN-182019"
      },
      {
        "date": "2022-06-02T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-3894"
      },
      {
        "date": "2022-03-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      },
      {
        "date": "2020-04-22T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      },
      {
        "date": "2024-11-21T05:31:54.757000",
        "db": "NVD",
        "id": "CVE-2020-3894"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168011"
      },
      {
        "db": "PACKETSTORM",
        "id": "160624"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      }
    ],
    "trust": 0.8
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "plural  Apple Race condition vulnerabilities in the product",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-003664"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "competition condition problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202003-1562"
      }
    ],
    "trust": 0.6
  }
}

VAR-202108-2123

Vulnerability from variot - Updated: 2025-12-22 22:23

A memory corruption vulnerability was addressed with improved locking. This issue is fixed in Safari 15, tvOS 15, watchOS 8, iOS 15 and iPadOS 15. Processing maliciously crafted web content may lead to code execution. Products from multiple vendors, including Apple's Safari, contain an out-of-bounds write vulnerability; exploitation may allow information to be obtained or tampered with, or may interrupt service operation (DoS). A security issue has been found in WebKitGTK and WPE WebKit prior to 2.34.0. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
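As an illustration of the out-of-bounds-write class described above (a hypothetical sketch; the function names and buffer size are made up and this is not WebKit's actual code), the bug pattern is an attacker-influenced index used without validation, and the fix is a bounds check before the write:

```c
#include <stddef.h>
#include <stdint.h>

enum { BUF_LEN = 16 };

/* Unsafe: writes wherever `index` points; if index >= BUF_LEN this
 * corrupts adjacent memory (the out-of-bounds write). */
static void store_unchecked(uint8_t *buf, size_t index, uint8_t value) {
    buf[index] = value;
}

/* Safe: bounds-checked variant.  Returns 1 on success, 0 when the
 * index is rejected as out of range. */
static int store_checked(uint8_t *buf, size_t index, uint8_t value) {
    if (index >= BUF_LEN)
        return 0;               /* reject out-of-range writes */
    buf[index] = value;
    return 1;
}
```

The checked variant turns memory corruption into a recoverable error, which is the general shape of "addressed with improved input validation" fixes in such advisories.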

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update
Advisory ID:       RHSA-2022:1777-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1777
Issue date:        2022-05-10
CVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823
                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848
                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884
                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889
                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934
                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952
                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984
                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483
                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592
                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637
=====================================================================

  1. Summary:

An update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64

  3. Description:

WebKitGTK is the port of the portable web rendering engine WebKit to the GTK platform.

The following packages have been upgraded to a later upstream version: webkit2gtk3 (2.34.6).

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  5. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source: webkit2gtk3-2.34.6-1.el8.src.rpm

aarch64: webkit2gtk3-2.34.6-1.el8.aarch64.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm webkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm

ppc64le: webkit2gtk3-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm

s390x: webkit2gtk3-2.34.6-1.el8.s390x.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm webkit2gtk3-devel-2.34.6-1.el8.s390x.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm

x86_64: webkit2gtk3-2.34.6-1.el8.i686.rpm webkit2gtk3-2.34.6-1.el8.x86_64.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm webkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm webkit2gtk3-devel-2.34.6-1.el8.i686.rpm webkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm webkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  6. References:

https://access.redhat.com/security/cve/CVE-2021-30809 https://access.redhat.com/security/cve/CVE-2021-30818 https://access.redhat.com/security/cve/CVE-2021-30823 https://access.redhat.com/security/cve/CVE-2021-30836 https://access.redhat.com/security/cve/CVE-2021-30846 https://access.redhat.com/security/cve/CVE-2021-30848 https://access.redhat.com/security/cve/CVE-2021-30849 https://access.redhat.com/security/cve/CVE-2021-30851 https://access.redhat.com/security/cve/CVE-2021-30884 https://access.redhat.com/security/cve/CVE-2021-30887 https://access.redhat.com/security/cve/CVE-2021-30888 https://access.redhat.com/security/cve/CVE-2021-30889 https://access.redhat.com/security/cve/CVE-2021-30890 https://access.redhat.com/security/cve/CVE-2021-30897 https://access.redhat.com/security/cve/CVE-2021-30934 https://access.redhat.com/security/cve/CVE-2021-30936 https://access.redhat.com/security/cve/CVE-2021-30951 https://access.redhat.com/security/cve/CVE-2021-30952 https://access.redhat.com/security/cve/CVE-2021-30953 https://access.redhat.com/security/cve/CVE-2021-30954 https://access.redhat.com/security/cve/CVE-2021-30984 https://access.redhat.com/security/cve/CVE-2021-45481 https://access.redhat.com/security/cve/CVE-2021-45482 https://access.redhat.com/security/cve/CVE-2021-45483 https://access.redhat.com/security/cve/CVE-2022-22589 https://access.redhat.com/security/cve/CVE-2022-22590 https://access.redhat.com/security/cve/CVE-2022-22592 https://access.redhat.com/security/cve/CVE-2022-22594 https://access.redhat.com/security/cve/CVE-2022-22620 https://access.redhat.com/security/cve/CVE-2022-22637 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/

  7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15

iOS 15 and iPadOS 15 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212814.

Accessory Manager Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory consumption issue was addressed with improved memory handling. CVE-2021-30837: Siddharth Aeri (@b1n4r1b01)

AppleMobileFileIntegrity Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to read sensitive information Description: This issue was addressed with improved checks. CVE-2021-30811: an anonymous researcher working with Compartir

Apple Neural Engine Available for devices with Apple Neural Engine: iPhone 8 and later, iPad Pro (3rd generation) and later, iPad Air (3rd generation) and later, and iPad mini (5th generation) Impact: A malicious application may be able to execute arbitrary code with system privileges on devices with an Apple Neural Engine Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30838: proteas wang

CoreML Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to cause unexpected application termination or arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30825: hjy79425575 working with Trend Micro Zero Day Initiative

Face ID Available for devices with Face ID: iPhone X, iPhone XR, iPhone XS (all models), iPhone 11 (all models), iPhone 12 (all models), iPad Pro (11-inch), and iPad Pro (3rd generation) Impact: A 3D model constructed to look like the enrolled user may be able to authenticate via Face ID Description: This issue was addressed by improving Face ID anti-spoofing models. CVE-2021-30863: Wish Wu (吴潍浠 @wish_wu) of Ant-financial Light-Year Security Lab

FontParser Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted dfont file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab CVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab

ImageIO Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30835: Ye Zhang of Baidu Security CVE-2021-30847: Mike Zhang of Pangu Lab

Kernel Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A race condition was addressed with improved locking. CVE-2021-30857: Zweig of Kunlun Lab
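The kernel entry above says the race condition "was addressed with improved locking." A minimal sketch of that fix pattern (illustrative only, not Apple's kernel code): two threads performing an unsynchronized read-modify-write on shared state lose updates, and serializing the critical section with a mutex makes the result deterministic.

```c
#include <pthread.h>

/* Shared state that two threads update concurrently. */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Without the lock, counter++ compiles to a racy load/add/store and
 * increments are lost.  Holding the mutex across the read-modify-write
 * is the "improved locking" fix. */
static void *increment_many(void *arg) {
    long n = *(long *)arg;
    for (long i = 0; i < n; i++) {
        pthread_mutex_lock(&counter_lock);   /* enter critical section */
        counter++;                           /* protected update */
        pthread_mutex_unlock(&counter_lock);
    }
    return (void *)0;
}
```

With the mutex in place, two threads each doing N increments always leave the counter at exactly 2N; remove the lock/unlock pair and the final value becomes nondeterministic, which is the exploitable window such advisories describe.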

libexpat Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed by updating expat to version 2.4.1. CVE-2013-0340: an anonymous researcher

Model I/O Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30819: Apple

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An application may be able to access restricted files Description: A validation issue existed in the handling of symlinks. CVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)
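The Preferences entry above describes "a validation issue in the handling of symlinks": an attacker plants a symbolic link so that a privileged component follows it into a restricted file. A common mitigation, sketched here as an assumption rather than Apple's actual fix, is to inspect the path with lstat() (which does not follow links) and refuse to operate on symlinks:

```c
#include <sys/stat.h>

/* Hypothetical helper: returns 1 if `path` exists and is not a
 * symbolic link, 0 otherwise.  lstat() reports on the link itself,
 * unlike stat(), which follows it. */
static int is_safe_to_open(const char *path) {
    struct stat st;
    if (lstat(path, &st) != 0)
        return 0;                /* missing or inaccessible */
    if (S_ISLNK(st.st_mode))
        return 0;                /* refuse symlinks outright */
    return 1;
}
```

Note that a check-then-open sequence is itself racy (TOCTOU); in production code the race-free form is to pass O_NOFOLLOW to open() so the kernel rejects the symlink atomically.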

Preferences Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A sandboxed process may be able to circumvent sandbox restrictions Description: A logic issue was addressed with improved state management. CVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)

Siri Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: A local attacker may be able to view contacts from the lock screen Description: A lock screen issue allowed access to contacts on a locked device. CVE-2021-30815: an anonymous researcher

Telephony Available for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad Air 2, iPad (5th generation), and iPad mini 4 Impact: In certain situations, the baseband would fail to enable integrity and ciphering protection Description: A logic issue was addressed with improved state management. CVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST SysSec Lab

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30846: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved memory handling. CVE-2021-30848: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30849: Sergei Glazunov of Google Project Zero

WebKit Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption vulnerability was addressed with improved locking. CVE-2021-30851: Samuel Groß of Google Project Zero

Wi-Fi Available for: iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation) Impact: An attacker in physical proximity may be able to force a user onto a malicious Wi-Fi network during device setup Description: An authorization issue was addressed with improved state management. CVE-2021-30810: an anonymous researcher

Additional recognition

Assets We would like to acknowledge Cees Elzinga for their assistance.

Bluetooth We would like to acknowledge an anonymous researcher for their assistance.

File System We would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their assistance.

Sandbox We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.

UIKit We would like to acknowledge an anonymous researcher for their assistance.

Installation note:

This update is available through iTunes and Software Update on your iOS device, and will not appear in your computer's Software Update application, or in the Apple Downloads site. Make sure you have an Internet connection and have installed the latest version of iTunes from https://www.apple.com/itunes/

iTunes and Software Update on the device will automatically check Apple's update server on its weekly schedule. When an update is detected, it is downloaded and the option to be installed is presented to the user when the iOS device is docked. We recommend applying the update immediately if possible. Selecting Don't Install will present the option the next time you connect your iOS device. The automatic update process may take up to a week depending on the day that iTunes or the device checks for updates. You may manually obtain the update via the Check for Updates button within iTunes, or the Software Update on your device.

To check that the iPhone, iPod touch, or iPad has been updated: * Navigate to Settings * Select General * Select About * The version after applying this update will be "15.0"

Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmFI88sACgkQeC9qKD1p rhjVNxAAx5mEiw7NE47t6rO8Y93xPrNSk+7xcqPeBHUCg+rLmqxtOhITRVUw0QLH sBIwSVK1RxxRjasBfgwbh1jyMYg6EzfKNQOeZGnjLA4WRmIj728aGehJ56Z/ggpJ awJNPvfzyoSdFgdwntj2FDih/d4oJmMwIyhNvqGrSwuybq9sznxJsTbpRbKWFhiz y/phwVywo0A//XBiup3hrscl5SmxO44WBDlL+MAv4FnRPk7IGbBCfO8HnhO/9j1n uSNoKtnJt2f6/06E0wMLStBSESKPSmKSz65AtR6h1oNytEuJtyHME8zOtz/Z9hlJ fo5f6tN6uCgYgYs7dyonWk5qiL7cue3ow58dPjZuNmqxFeu5bCc/1G47LDcdwoAg o0aU22MEs4wKdBWrtFI40BQ5LEmFrh3NM82GwFYDBz92x7B9SpXMau6F82pCvvyG wNbRL5PzHUMa1/LfVcoOyWk0gZBeD8/GFzNmwBeZmBMlpoPRo5bkyjUn8/ABG+Ie 665EHgGYs5BIV20Gviax0g4bjGHqEndfxdjyDHzJ87d65iA5uYdOiw61WpHq9kwH hK4e6BF0CahTrpz6UGVuOA/VMMDv3ZqhW+a95KBzDa7dHVnokpV+WiMJcmoEJ5gI 5ovpqig9qjf7zMv3+3Mlij0anQ24w7+MjEBk7WDG+2al83GCMAE= =nkJd -----END PGP SIGNATURE-----

CVE-2021-30811: an anonymous researcher working with Compartir

bootp Available for: Apple Watch Series 3 and later Impact: A device may be passively tracked by its WiFi MAC address Description: A user privacy issue was addressed by removing the broadcast MAC address. CVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security Entry added October 25, 2021

WebKit Available for: Apple Watch Series 3 and later Impact: Visiting a maliciously crafted website may reveal a user's browsing history Description: The issue was resolved with additional restrictions on CSS compositing. CVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research Entry added October 25, 2021

WebKit Available for: Apple Watch Series 3 and later Impact: An attacker in a privileged network position may be able to bypass HSTS Description: A logic issue was addressed with improved restrictions.

Alternatively, on your watch, select "My Watch > General > About". CVE-2021-30851: Samuel Groß of Google Project Zero

Installation note:

This update may be obtained from the Mac App Store


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202108-2123",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "tvos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "watchos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "8.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "15.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "macos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0.1"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0.1"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "33"
      },
      {
        "model": "ipados",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "macos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "watchos",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": "8.0"
      },
      {
        "model": "ios",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "safari",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "tvos",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30a2\u30c3\u30d7\u30eb",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Apple",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164241"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2021-30851",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-30851",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-390584",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2021-30851",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2021-30851",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-30851",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-30851",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202108-1949",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-390584",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-30851",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A memory corruption vulnerability was addressed with improved locking. This issue is fixed in Safari 15, tvOS 15, watchOS 8, iOS 15 and iPadOS 15. Processing maliciously crafted web content may lead to code execution. Apple\u0027s Safari and products from multiple vendors, such as the following, contain an out-of-bounds write vulnerability. Information may be obtained, information may be tampered with, and service operation may be disrupted (DoS). A security issue has been found in WebKitGTK and WPE WebKit prior to 2.34.0. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: webkit2gtk3 security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2022:1777-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:1777\nIssue date:        2022-05-10\nCVE Names:         CVE-2021-30809 CVE-2021-30818 CVE-2021-30823 \n                   CVE-2021-30836 CVE-2021-30846 CVE-2021-30848 \n                   CVE-2021-30849 CVE-2021-30851 CVE-2021-30884 \n                   CVE-2021-30887 CVE-2021-30888 CVE-2021-30889 \n                   CVE-2021-30890 CVE-2021-30897 CVE-2021-30934 \n                   CVE-2021-30936 CVE-2021-30951 CVE-2021-30952 \n                   CVE-2021-30953 CVE-2021-30954 CVE-2021-30984 \n                   CVE-2021-45481 CVE-2021-45482 CVE-2021-45483 \n                   CVE-2022-22589 CVE-2022-22590 CVE-2022-22592 \n                   CVE-2022-22594 CVE-2022-22620 CVE-2022-22637 \n=====================================================================\n\n1. Summary:\n\nAn update for webkit2gtk3 is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. Description:\n\nWebKitGTK is the port of the portable web rendering engine WebKit to the\nGTK platform. \n\nThe following packages have been upgraded to a later upstream version:\nwebkit2gtk3 (2.34.6). \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\nSource:\nwebkit2gtk3-2.34.6-1.el8.src.rpm\n\naarch64:\nwebkit2gtk3-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.aarch64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.aarch64.rpm\n\nppc64le:\nwebkit2gtk3-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.ppc64le.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.ppc64le.rpm\n\ns390x:\nwebkit2gtk3-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.s390x.rpm\nwebkit2
gtk3-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.s390x.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.s390x.rpm\n\nx86_64:\nwebkit2gtk3-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-debugsource-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-debuginfo-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-2.34.6-1.el8.x86_64.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.i686.rpm\nwebkit2gtk3-jsc-devel-debuginfo-2.34.6-1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-30809\nhttps://access.redhat.com/security/cve/CVE-2021-30818\nhttps://access.redhat.com/security/cve/CVE-2021-30823\nhttps://access.redhat.com/security/cve/CVE-2021-30836\nhttps://access.redhat.com/security/cve/CVE-2021-30846\nhttps://access.redhat.com/security/cve/CVE-2021-30848\nhttps://access.redhat.com/security/cve/CVE-2021-30849\nhttps://access.redhat.com/security/cve/CVE-2021-30851\nhttps://access.redhat.com/security/cve/CVE-2021-30884\nhttps://access.redhat.com/security/cve/CVE-2021-30887\nhttps://access.redhat.com/security/cve/CVE-2021-30888\nhttps://access.redhat.com/security/cve/CVE-2021-30889\nhttps://access.redhat.com/security/cve/CVE-2021-30890\nhttps://access.redhat.com/security/cve/CVE-2021-30897\nhttps://access.redhat.com/security/cve/CVE-2021-30934\nhttps://access.redhat.com/security/cve/CVE-2021-30936\nhttps://access.redhat.com/security/cve/CVE-2021-30951\nhttps://access.redhat.com/security/cve/CVE-2021-30952\nhttps://access.redhat.com/security/cve/CVE-2021-30953\nhttps://access.redhat.com/security/cve/CVE-2021-30954\nhttps://access.redhat.com/security/cve/CVE-2021-30984\nhttps://access.redhat.com/security/cve/CVE-2021-45481\nhttps://access.redhat.com/security/cve/CVE-2021-45482\nhttps://access.redhat.com/security/cve/CVE-2021-45483\nhttps://access.redhat.com/security/cve/CVE-2022-22589\nhttps://access.redhat.com/security/cve/CVE-2022-22590\nhttps://access.redhat.com/security/cve/CVE-2022-22592\nhttps://access.redhat.com/security/cve/CVE-2022-22594\nhttps://access.redhat.com/security/cve/CVE-2022-22620\nhttps://access.redhat.com/security/cve/CVE-2022-22637\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-09-20-1 iOS 15 and iPadOS 15\n\niOS 15 and iPadOS 15 addresses the following issues. Information\nabout the security content is also available at\nhttps://support.apple.com/HT212814. \n\nAccessory Manager\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nCVE-2021-30837: Siddharth Aeri (@b1n4r1b01)\n\nAppleMobileFileIntegrity\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to read sensitive information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30811: an anonymous researcher working with Compartir\n\nApple Neural Engine\nAvailable for devices with Apple Neural Engine: iPhone 8 and later,\niPad Pro (3rd generation) and later, iPad Air (3rd generation) and\nlater, and iPad mini (5th generation) \nImpact: A malicious application may be able to execute arbitrary code\nwith system privileges on devices with an Apple Neural Engine\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2021-30838: proteas wang\n\nCoreML\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30825: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nFace ID\nAvailable for devices with Face ID: iPhone X, iPhone XR, iPhone XS\n(all models), iPhone 11 (all models), iPhone 12 (all models), iPad\nPro (11-inch), and iPad Pro (3rd generation)\nImpact: A 3D model constructed to look like the enrolled user may be\nable to authenticate via Face ID\nDescription: This issue was addressed by improving Face ID anti-\nspoofing models. \nCVE-2021-30863: Wish Wu (\u5434\u6f4d\u6d60 @wish_wu) of Ant-financial Light-Year\nSecurity Lab\n\nFontParser\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted dfont file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30841: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30842: Xingwei Lin of Ant Security Light-Year Lab\nCVE-2021-30843: Xingwei Lin of Ant Security Light-Year Lab\n\nImageIO\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30835: Ye Zhang of Baidu Security\nCVE-2021-30847: Mike Zhang of Pangu Lab\n\nKernel\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A race condition was addressed with improved locking. 
\nCVE-2021-30857: Zweig of Kunlun Lab\n\nlibexpat\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed by updating expat to version\n2.4.1. \nCVE-2013-0340: an anonymous researcher\n\nModel I/O\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30819: Apple\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An application may be able to access restricted files\nDescription: A validation issue existed in the handling of symlinks. \nCVE-2021-30855: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nPreferences\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A sandboxed process may be able to circumvent sandbox\nrestrictions\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2021-30854: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nSiri\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: A local attacker may be able to view contacts from the lock\nscreen\nDescription: A lock screen issue allowed access to contacts on a\nlocked device. \nCVE-2021-30815: an anonymous researcher\n\nTelephony\nAvailable for: iPhone SE (1st generation), iPad Pro 12.9-inch, iPad\nAir 2, iPad (5th generation), and iPad mini 4\nImpact: In certain situations, the baseband would fail to enable\nintegrity and ciphering protection\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30826: CheolJun Park, Sangwook Bae and BeomSeok Oh of KAIST\nSysSec Lab\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. \nCVE-2021-30846: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nmemory handling. 
\nCVE-2021-30848: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30849: Sergei Glazunov of Google Project Zero\n\nWebKit\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption vulnerability was addressed with\nimproved locking. \nCVE-2021-30851: Samuel Gro\u00df of Google Project Zero\n\nWi-Fi\nAvailable for: iPhone 6s and later, iPad Pro (all models), iPad Air 2\nand later, iPad 5th generation and later, iPad mini 4 and later, and\niPod touch (7th generation)\nImpact: An attacker in physical proximity may be able to force a user\nonto a malicious Wi-Fi network during device setup\nDescription: An authorization issue was addressed with improved state\nmanagement. \nCVE-2021-30810: an anonymous researcher\n\nAdditional recognition\n\nAssets\nWe would like to acknowledge Cees Elzinga for their assistance. \n\nBluetooth\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nFile System\nWe would like to acknowledge Siddharth Aeri (@b1n4r1b01) for their\nassistance. \n\nSandbox\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nUIKit\nWe would like to acknowledge an anonymous researcher for their\nassistance. 
\n\nInstallation note:\n\nThis update is available through iTunes and Software Update on your\niOS device, and will not appear in your computer\u0027s Software Update\napplication, or in the Apple Downloads site. Make sure you have an\nInternet connection and have installed the latest version of iTunes\nfrom https://www.apple.com/itunes/\n\niTunes and Software Update on the device will automatically check\nApple\u0027s update server on its weekly schedule. When an update is\ndetected, it is downloaded and the option to be installed is\npresented to the user when the iOS device is docked. We recommend\napplying the update immediately if possible. Selecting Don\u0027t Install\nwill present the option the next time you connect your iOS device. \nThe automatic update process may take up to a week depending on the\nday that iTunes or the device checks for updates. You may manually\nobtain the update via the Check for Updates button within iTunes, or\nthe Software Update on your device. \n\nTo check that the iPhone, iPod touch, or iPad has been updated:\n* Navigate to Settings\n* Select General\n* Select About\n* The version after applying this update will be \"15.0\"\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmFI88sACgkQeC9qKD1p\nrhjVNxAAx5mEiw7NE47t6rO8Y93xPrNSk+7xcqPeBHUCg+rLmqxtOhITRVUw0QLH\nsBIwSVK1RxxRjasBfgwbh1jyMYg6EzfKNQOeZGnjLA4WRmIj728aGehJ56Z/ggpJ\nawJNPvfzyoSdFgdwntj2FDih/d4oJmMwIyhNvqGrSwuybq9sznxJsTbpRbKWFhiz\ny/phwVywo0A//XBiup3hrscl5SmxO44WBDlL+MAv4FnRPk7IGbBCfO8HnhO/9j1n\nuSNoKtnJt2f6/06E0wMLStBSESKPSmKSz65AtR6h1oNytEuJtyHME8zOtz/Z9hlJ\nfo5f6tN6uCgYgYs7dyonWk5qiL7cue3ow58dPjZuNmqxFeu5bCc/1G47LDcdwoAg\no0aU22MEs4wKdBWrtFI40BQ5LEmFrh3NM82GwFYDBz92x7B9SpXMau6F82pCvvyG\nwNbRL5PzHUMa1/LfVcoOyWk0gZBeD8/GFzNmwBeZmBMlpoPRo5bkyjUn8/ABG+Ie\n665EHgGYs5BIV20Gviax0g4bjGHqEndfxdjyDHzJ87d65iA5uYdOiw61WpHq9kwH\nhK4e6BF0CahTrpz6UGVuOA/VMMDv3ZqhW+a95KBzDa7dHVnokpV+WiMJcmoEJ5gI\n5ovpqig9qjf7zMv3+3Mlij0anQ24w7+MjEBk7WDG+2al83GCMAE=\n=nkJd\n-----END PGP SIGNATURE-----\n\n\n\n. \nCVE-2021-30811: an anonymous researcher working with Compartir\n\nbootp\nAvailable for: Apple Watch Series 3 and later\nImpact: A device may be passively tracked by its WiFi MAC address\nDescription: A user privacy issue was addressed by removing the\nbroadcast MAC address. \nCVE-2021-30808: Csaba Fitzl (@theevilbit) of Offensive Security\nEntry added October 25, 2021\n\nWebKit\nAvailable for: Apple Watch Series 3 and later\nImpact: Visiting a maliciously crafted website may reveal a user\u0027s\nbrowsing history\nDescription: The issue was resolved with additional restrictions on\nCSS compositing. \nCVE-2021-30818: Amar Menezes (@amarekano) of Zon8Research\nEntry added October 25, 2021\n\nWebKit\nAvailable for: Apple Watch Series 3 and later\nImpact: An attacker in a privileged network position may be able to\nbypass HSTS\nDescription: A logic issue was addressed with improved restrictions. \n\nAlternatively, on your watch, select \"My Watch \u003e General \u003e About\". \nCVE-2021-30851: Samuel Gro\u00df of Google Project Zero\n\nInstallation note:\n\nThis update may be obtained from the Mac App Store",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-30851"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164241"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-30851",
        "trust": 4.1
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/10/31/1",
        "trust": 2.6
      },
      {
        "db": "PACKETSTORM",
        "id": "167037",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164692",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164234",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4084",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3159.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3631",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3641",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0382",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3578",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3996",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021092024",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022051140",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021110113",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164688",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164693",
        "trust": 0.2
      },
      {
        "db": "VULHUB",
        "id": "VHN-390584",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30851",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164233",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164241",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164241"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "id": "VAR-202108-2123",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390584"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2025-12-22T22:23:33.196000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "HT212819 Apple Security update",
        "trust": 0.8,
        "url": "https://www.debian.org/security/2021/dsa-4995"
      },
      {
        "title": "Apple tvOS Buffer error vulnerability fix",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=167643"
      },
      {
        "title": "Red Hat: CVE-2021-30851",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2021-30851"
      },
      {
        "title": "Debian Security Advisories: DSA-4996-1 wpewebkit -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=f71426611b6d65dd2fc20f96e1b5aa21"
      },
      {
        "title": "Debian Security Advisories: DSA-4995-1 webkit2gtk -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=54e170d5f64268e52cbdddfae4a29a4d"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-30851 log"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202110-9] webkit2gtk: multiple issues",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202110-9"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202110-10] wpewebkit: multiple issues",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202110-10"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-015",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-015"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2023-2088",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2088"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/RUB-SysSec/JIT-Picker "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.1
      },
      {
        "problemtype": "Out-of-bounds write (CWE-787) [NVD evaluation]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.6,
        "url": "http://www.openwall.com/lists/oss-security/2021/10/31/1"
      },
      {
        "trust": 2.4,
        "url": "https://support.apple.com/en-us/ht212815"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/kb/ht212869"
      },
      {
        "trust": 1.8,
        "url": "https://www.debian.org/security/2021/dsa-4995"
      },
      {
        "trust": 1.8,
        "url": "https://www.debian.org/security/2021/dsa-4996"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht212814"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht212816"
      },
      {
        "trust": 1.8,
        "url": "https://support.apple.com/en-us/ht212819"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30851"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/h6mgxcx7p5ahwoq6irt477ukt7is4dad/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/on5sdvvpvpcagfpw2ghyatzvzylpw2l4/"
      },
      {
        "trust": 0.8,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/on5sdvvpvpcagfpw2ghyatzvzylpw2l4/"
      },
      {
        "trust": 0.8,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/h6mgxcx7p5ahwoq6irt477ukt7is4dad/"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2021-30851"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30846"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30849"
      },
      {
        "trust": 0.6,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021110113"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3159.2"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0382"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021092024"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/webkitgtk-wpe-webkit-multiple-vulnerabilities-36750"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/167037/red-hat-security-advisory-2022-1777-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3996"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3578"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022051140"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3631"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3641"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4084"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164234/apple-security-advisory-2021-09-20-2.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164692/apple-security-advisory-2021-10-26-10.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/apple-ios-macos-multiple-vulnerabilities-36461"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30848"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30809"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30823"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30818"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30841"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30835"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30810"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30857"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30843"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30847"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30842"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2013-0340"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30837"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30854"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30836"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30855"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30811"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30884"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30852"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30808"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30834"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30831"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30866"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30814"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30840"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/kb/ht204641"
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/ht212819."
      },
      {
        "trust": 0.2,
        "url": "https://support.apple.com/ht212816."
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://security.archlinux.org/cve-2021-30851"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22592"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22637"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30809"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30846"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1777"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30888"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22620"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30952"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30897"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30936"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22594"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30887"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45483"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30849"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30836"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22589"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30953"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30954"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22590"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30890"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30815"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212814."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30863"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30838"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/itunes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30825"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30819"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212815."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30850"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164241"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "db": "PACKETSTORM",
        "id": "164241"
      },
      {
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-08-24T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "date": "2021-08-24T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "date": "2022-05-11T15:50:41",
        "db": "PACKETSTORM",
        "id": "167037"
      },
      {
        "date": "2021-09-22T16:22:10",
        "db": "PACKETSTORM",
        "id": "164233"
      },
      {
        "date": "2021-10-28T14:58:57",
        "db": "PACKETSTORM",
        "id": "164693"
      },
      {
        "date": "2021-10-28T14:58:43",
        "db": "PACKETSTORM",
        "id": "164692"
      },
      {
        "date": "2021-10-28T14:55:07",
        "db": "PACKETSTORM",
        "id": "164688"
      },
      {
        "date": "2021-09-22T16:29:53",
        "db": "PACKETSTORM",
        "id": "164241"
      },
      {
        "date": "2021-09-22T16:22:32",
        "db": "PACKETSTORM",
        "id": "164234"
      },
      {
        "date": "2021-08-24T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      },
      {
        "date": "2024-07-19T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "date": "2021-08-24T19:15:13.373000",
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-390584"
      },
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-30851"
      },
      {
        "date": "2022-05-12T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      },
      {
        "date": "2024-07-19T07:14:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      },
      {
        "date": "2023-11-07T03:33:31.553000",
        "db": "NVD",
        "id": "CVE-2021-30851"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Out-of-bounds write vulnerability in products from multiple vendors such as Apple\u0027s Safari",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-021226"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "buffer error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-1949"
      }
    ],
    "trust": 0.6
  }
}