VAR-202205-2059
Vulnerability from variot - Updated: 2025-12-22 22:49

Out-of-bounds Write in GitHub repository vim/vim prior to 8.2. Vim is a cross-platform text editor. There is a security vulnerability in versions prior to Vim 8.2, caused by an out-of-bounds write problem.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
                   Red Hat Security Advisory
Synopsis:          Moderate: RHACS 3.72 enhancement and security update
Advisory ID:       RHSA-2022:6714-01
Product:           RHACS
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6714
Issue date:        2022-09-26
CVE Names:         CVE-2015-20107 CVE-2022-0391 CVE-2022-1292
                   CVE-2022-1586 CVE-2022-1785 CVE-2022-1897
                   CVE-2022-1927 CVE-2022-2068 CVE-2022-2097
                   CVE-2022-24675 CVE-2022-24921 CVE-2022-28327
                   CVE-2022-29154 CVE-2022-29526 CVE-2022-30631
                   CVE-2022-32206 CVE-2022-32208 CVE-2022-34903
=====================================================================
1. Summary:
Updated images are now available for Red Hat Advanced Cluster Security for Kubernetes (RHACS). The updated image includes new features and bug fixes.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Description:
Release of RHACS 3.72 provides these changes:
New features

* Automatic removal of nonactive clusters from RHACS: RHACS provides the ability to configure your system to automatically remove nonactive clusters from RHACS so that you can monitor active clusters only.

* Support for unauthenticated email integration: RHACS now supports unauthenticated SMTP for email integrations. This is insecure and not recommended.

* Support for Quay robot accounts: RHACS now supports use of robot accounts in quay.io integrations. You can create robot accounts in Quay that allow you to share credentials for use in multiple repositories.

* Ability to view Dockerfile lines in images that introduced components with Common Vulnerabilities and Exposures (CVEs): In the Images view, under Image Findings, you can view individual lines in the Dockerfile that introduced the components that have been identified as containing CVEs.

* Network graph improvements: RHACS 3.72 includes some improvements to the Network Graph user interface.
Known issue

* RHACS shows the wrong severity when two severities exist for a single vulnerability in a single distribution. This issue occurs because RHACS scopes severities by namespace rather than component. There is no workaround. It is anticipated that an upcoming release will include a fix for this issue. (ROX-12527)
Bug fixes

* Before this update, the steps to configure OpenShift Container Platform OAuth for more than one URI were missing. The documentation has been revised to include instructions for configuring OAuth in OpenShift Container Platform to use more than one URI. For more information, see Creating additional routes for the OpenShift Container Platform OAuth server. (ROX-11296)

* Before this update, the autogenerated image integration, such as a Docker registry integration, for a cluster was not deleted when the cluster was removed from Central. This issue is fixed. (ROX-9398)

* Before this update, the Image OS policy criteria did not support regular expressions (regex), although the documentation indicated that regular expressions were supported. This issue is fixed by adding support for regular expressions for the Image OS policy criteria. (ROX-12301)

* Before this update, the syslog integration did not respect a configured TCP proxy. This is now fixed.

* Before this update, the scanner-db pod failed to start when a resource quota was set for the stackrox namespace, because the init-db container in the pod did not have any resources assigned to it. The init-db container for ScannerDB now specifies resource requests and limits that match the db container. (ROX-12291)
Notable technical changes

* Scanning support for Red Hat Enterprise Linux 9: RHEL 9 is now generally available (GA). RHACS 3.72 introduces support for analyzing images built with Red Hat Universal Base Image (UBI) 9 and Red Hat Enterprise Linux (RHEL) 9 RPMs for vulnerabilities.

* Policy for CVEs with fixable CVSS of 6 or greater disabled by default: Beginning with this release, the Fixable CVSS >= 6 and Privileged policy is no longer enabled by default for new RHACS installations. The configuration of this policy is not changed when upgrading an existing system. A new policy, Privileged Containers with Important and Critical Fixable CVEs, which gives an alert for containers running in privileged mode that have important or critical fixable vulnerabilities, has been added.
Security Fix(es)

* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
* golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
* golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
* golang: syscall: faccessat checks wrong group (CVE-2022-29526)
* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
3. Solution:
To take advantage of the new features, bug fixes, and enhancements in RHACS 3.72, you are advised to upgrade to RHACS 3.72.0.
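For installations managed with Helm, the upgrade is typically a chart refresh followed by a release upgrade, along these lines (a sketch only; the repository URL and release names below are the commonly documented defaults and may differ from your installation):

$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
$ helm repo update
$ helm upgrade -n stackrox stackrox-central-services rhacs/central-services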
4. Bugs fixed (https://bugzilla.redhat.com/):
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
5. JIRA issues fixed (https://issues.jboss.org/):
ROX-12799 - Release RHACS 3.72.0
6. References:
https://access.redhat.com/security/cve/CVE-2015-20107
https://access.redhat.com/security/cve/CVE-2022-0391
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-29154
https://access.redhat.com/security/cve/CVE-2022-29526
https://access.redhat.com/security/cve/CVE-2022-30631
https://access.redhat.com/security/cve/CVE-2022-32206
https://access.redhat.com/security/cve/CVE-2022-32208
https://access.redhat.com/security/cve/CVE-2022-34903
https://access.redhat.com/security/updates/classification/#moderate
https://docs.openshift.com/acs/3.72/release_notes/372-release-notes.html
7. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBYzH0ItzjgjWX9erEAQg2Yg//fDLYNktH9vd06FrD5L77TeiYnD/Zx+f5
fk12roODKMOpcV6BmnOyPG0a6POCmhHn1Dn6bOT+7Awx0b9A9cXXDk6jytkpDhh7
O0OxzWZVVvSzNe1TL3WN9vwZqSpAYON8euLBEb16E8pmEv7vXKll3wMQIlctp6Nr
ey6DLL718z8ghXbtkkcGsBQqElM4jESvGm5xByMymfRFktvy9LSgTi+Zc7FY7gXL
AHitJZiSm57D/pwUHvNltLLkxQfVAGuJXaTHYFyeIi6Z2pdDySYAXcr60mVd6eSh
9/7qGwdsQARwmr174s0xMWRcns6UDvwIWifiXl6FUnTZFlia+lC3xIP1o2CXwoFP
Fr7LpF0L9h5BapjSRv1w6qkkJIyJhw5v9VmZQoQ3joZqRQi0I6qLOcp92eik63pM
i11ppoeDNwjpSST40Ema3j9PflzxXB7PKBUfKWwqNc2dnWDkiEhNaXOAZ7MqgdLo
MB3enlKV4deeWOb5OA1Vlv/lAAJM0h5AOgTIBddYs3CDsyoK9fKm1UF/BEhcWMyr
kV3AJ0/zzAK6ev4hQmP8Ug4SbdiHNdM3X1vgH54OVJ3Al3E1nAEyYmELNUITrvXV
jJI5thbVwK78vOX9yWcmpZm879BnHnUPzGbS0lF5FVJOSZ8E7LvOE7lCM/dg094z
0riGwT9O9Ys=
=hArw
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2115198 - build ceph containers for RHCS 5.2 release
==========================================================================
Ubuntu Security Notice USN-5507-1
July 08, 2022

vim vulnerabilities
==========================================================================
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
Summary:
Several security issues were fixed in Vim.
Software Description:
- vim: Vi IMproved - enhanced vi editor
Details:
It was discovered that Vim incorrectly handled memory access. An attacker could potentially use this issue to cause the program to crash, use unexpected values, or execute arbitrary code. (CVE-2022-1968)
It was discovered that Vim incorrectly handled memory access. An attacker could potentially use this issue to cause the corruption of sensitive information, a crash, or arbitrary code execution. (CVE-2022-1897, CVE-2022-1942)
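A quick way to tell whether an installed Vim build already includes a given fix is to check the base version and patch level from the command line (a generic check, not taken from the advisory itself):

$ vim --version | head -2

The first lines of output report the base version (for example, 8.2) and the range of included patches.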
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM:
  vim 2:7.4.1689-3ubuntu1.5+esm10
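On an affected host, applying and verifying the fix follows the usual apt workflow (a sketch; the expected version string is the one listed above):

$ sudo apt-get update
$ sudo apt-get install --only-upgrade vim
$ dpkg -s vim | grep '^Version:'
Version: 2:7.4.1689-3ubuntu1.5+esm10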
In general, a standard system update will make all the necessary changes.

Summary:
An update is now available for RHOL-5.5-RHEL-8.

Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
4. Bugs fixed (https://bugzilla.redhat.com/):
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
5. JIRA issues fixed (https://issues.jboss.org/):
LOG-1415 - Allow users to tune fluentd
LOG-1539 - Events and CLO csv are not collected after running oc adm must-gather --image=$downstream-clo-image
LOG-1713 - Reduce Permissions granted for prometheus-k8s service account
LOG-2063 - Collector pods fail to start when a Vector only Cluster Logging instance is created.
LOG-2134 - The infra logs are sent to app-xx indices
LOG-2159 - Cluster Logging Pods in CrashLoopBackOff
LOG-2165 - [Vector] Default log level debug makes it hard to find useful error/failure messages.
LOG-2167 - [Vector] Collector pods fails to start with configuration error when using Kafka SASL over SSL
LOG-2169 - [Vector] Logs not being sent to Kafka with SASL plaintext.
LOG-2172 - [vector]The openshift-apiserver and ovn audit logs can not be collected.
LOG-2242 - Log file metric exporter is still following /var/log/containers files.
LOG-2243 - grafana-dashboard-cluster-logging should be deleted once clusterlogging/instance was removed
LOG-2264 - Logging link should contain an icon
LOG-2274 - [Logging 5.5] EO doesn't recreate secrets kibana and kibana-proxy after removing them.
LOG-2276 - Fluent config format is hard to read via configmap
LOG-2290 - ClusterLogging Instance status in not getting updated in UI
LOG-2291 - [release-5.5] Events listing out of order in Kibana 6.8.1
LOG-2294 - [Vector] Vector internal metrics are not exposed via HTTPS due to which OpenShift Monitoring Prometheus service cannot scrape the metrics endpoint.
LOG-2300 - [Logging 5.5]ES pods can't be ready after removing secret/signing-elasticsearch
LOG-2303 - [Logging 5.5] Elasticsearch cluster upgrade stuck
LOG-2308 - configmap grafana-dashboard-elasticsearch is being created and deleted continously
LOG-2333 - Journal logs not reaching Elasticsearch output
LOG-2337 - [Vector] Missing @ prefix from the timestamp field in log record.
LOG-2342 - [Logging 5.5] Kibana pod can't connect to ES cluster after removing secret/signing-elasticsearch: "x509: certificate signed by unknown authority"
LOG-2384 - Provide a method to get authenticated from GCP
LOG-2411 - [Vector] Audit logs forwarding not working.
LOG-2412 - CLO's loki output url is parsed wrongly
LOG-2413 - PriorityClass cluster-logging is deleted if provide an invalid log type
LOG-2418 - EO supported time units don't match the units specified in CRDs.
LOG-2439 - Telemetry: the managedStatus&healthStatus&version values are wrong
LOG-2440 - [loki-operator] Live tail of logs does not work on OpenShift
LOG-2444 - The write index is removed when the size of the index > diskThresholdPercent% * total size.
LOG-2460 - [Vector] Collector pods fail to start on a FIPS enabled cluster.
LOG-2461 - [Vector] Vector auth config not generated when user provided bearer token is used in a secret for connecting to LokiStack.
LOG-2463 - Elasticsearch operator repeatedly prints error message when checking indices
LOG-2474 - EO shouldn't grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.5]
LOG-2522 - CLO supported time units don't match the units specified in CRDs.
LOG-2525 - The container's logs are not sent to separate index if the annotation is added after the pod is ready.
LOG-2546 - TLS handshake error on loki-gateway for FIPS cluster
LOG-2549 - [Vector] [master] Journald logs not sent to the Log store when using Vector as collector.
LOG-2554 - [Vector] [master] Fallback index is not used when structuredTypeKey is missing from JSON log data
LOG-2588 - FluentdQueueLengthIncreasing rule failing to be evaluated.
LOG-2596 - [vector]the condition in [transforms.route_container_logs] is inaccurate
LOG-2599 - Supported values for level field don't match documentation
LOG-2605 - $labels.instance is empty in the message when firing FluentdNodeDown alert
LOG-2609 - fluentd and vector are unable to ship logs to elasticsearch when cluster-wide proxy is in effect
LOG-2619 - containers violate PodSecurity -- Log Exporation
LOG-2627 - containers violate PodSecurity -- Loki
LOG-2649 - Level Critical should match the beginning of the line as the other levels
LOG-2656 - Logging uses deprecated v1beta1 apis
LOG-2664 - Deprecated Feature logs causing too much noise
LOG-2665 - [Logging 5.5] Sometimes collector fails to push logs to Elasticsearch cluster
LOG-2693 - Integration with Jaeger fails for ServiceMonitor
LOG-2700 - [Vector] vector container can't start due to "unknown field pod_annotation_fields" .
LOG-2703 - Collector DaemonSet is not removed when CLF is deleted for fluentd/vector only CL instance
LOG-2725 - Upgrade logging-eventrouter Golang version and tags
LOG-2731 - CLO keeps reporting Reconcile ServiceMonitor retry error and Reconcile Service retry error after creating clusterlogging.
LOG-2732 - Prometheus Operator pod throws 'skipping servicemonitor' error on Jaeger integration
LOG-2742 - unrecognized outputs when use the sts role secret
LOG-2746 - CloudWatch forwarding rejecting large log events, fills tmpfs
LOG-2749 - OpenShift Logging Dashboard for Elastic Shards shows "active_primary" instead of "active" shards.
LOG-2753 - Update Grafana configuration for LokiStack integration on grafana/loki repo
LOG-2763 - [Vector]{Master} Vector's healthcheck fails when forwarding logs to Lokistack.
LOG-2764 - ElasticSearch operator does not respect referencePolicy when selecting oauth-proxy image
LOG-2765 - ingester pod can not be started in IPv6 cluster
LOG-2766 - [vector] failed to parse cluster url: invalid authority IPv6 http-proxy
LOG-2772 - arn validation failed when role_arn=arn:aws-us-gov:xxx
LOG-2773 - No cluster-logging-operator-metrics service in logging 5.5
LOG-2778 - [Vector] [OCP 4.11] SA token not added to Vector config when connecting to LokiStack instance without CLF creds secret required by LokiStack.
LOG-2784 - Japanese log messages are garbled at Kibana
LOG-2793 - [Vector] OVN audit logs are missing the level field.
LOG-2864 - [vector] Can not sent logs to default when loki is the default output in CLF
LOG-2867 - [fluentd] All logs are sent to application tenant when loki is used as default logstore in CLF.
LOG-2873 - [Vector] Cannot configure CPU/Memory requests/limits when using Vector as collector.
LOG-2875 - Seeing a black rectangle box on the graph in Logs view
LOG-2876 - The link to the 'Container details' page on the 'Logs' screen throws error
LOG-2877 - When there is no query entered, seeing error message on the Logs view
LOG-2882 - RefreshIntervalDropdown and TimeRangeDropdown always set back to its original values when switching between pages in 'Logs' screen
6. Description:
OpenShift sandboxed containers support for OpenShift Container Platform provides users with built-in support for running Kata containers as an additional, optional runtime.

Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:6102
Space precludes documenting all of the container images in this advisory.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.1-x86_64
The image digest is sha256:97410a5db655a9d3017b735c2c0747c849d09ff551765e49d5272b80c024a844
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.1-s390x
The image digest is sha256:13734de7e796e46f5403ef9ee918be88c12fdc9b73acb8777e0cc7c56a276794
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.1-ppc64le
The image digest is sha256:d0019b6b8b32cc9fea06562e6ce175086fa7de7b2b7dce171a8ac1a57f92f10b
(For aarch64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.1-aarch64
The image digest is sha256:3394a79e173ac17bc96a7256665701d3d7e2a95535a12f2ceb19ceb41dcd6b79
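The same command also accepts a digest instead of a tag, which is useful for verifying that a mirrored or pinned release matches one of the digests above, for example:

$ oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:97410a5db655a9d3017b735c2c0747c849d09ff551765e49d5272b80c024a844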
All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
3. Solution:
For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html
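From the CLI, checking for and starting the update generally looks like the following (a sketch of the standard oc flow covered by the documentation above):

$ oc adm upgrade
$ oc adm upgrade --to=4.11.1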
4. Bugs fixed (https://bugzilla.redhat.com/):
2033256 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_route_table.private_routes resource
2040715 - post 1.23 rebase: regression in service-load balancer reliability
2063622 - Failed to install the podman package from repo rhocp-4.10-for-rhel-8-x86_64-rpms
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2102576 - [4.11] [Cluster storage Operator] DefaultStorageClassController report fake message "No default StorageClass for this platform" on azure and openstack
2103638 - No need to pass to-image-base for oc adm release new command when use --from-release
2103899 - [OVN] bonding fails after active-backup fail-over and reboot, kargs static IP
2104386 - OVS-Configure doesn't iterate connection names containing spaces correctly
2104435 - [dpu-network-operator] Updating images to be consistent with ART
2104510 - Update ose-machine-config-operator images to be consistent with ART
2104687 - MCP upgrades can stall waiting for master node reboots since MCC no longer gets drained
2105056 - Openshift-Ansible RHEL 8 CI update
2105444 - [OVN] Node to service traffic is blocked if service is "internalTrafficPolicy: Local" even backed pod is on the same node
2106772 - openshift4/ose-operator-registry image is vulnerable to multiple CVEs
2106795 - crio umask sometimes set to 0000
2107003 - The bash completion doesn't work for get subcommand
2107045 - OLM updates namespace labels even if they haven't changed
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107777 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2107871 - Import: Advanced option sentence is splited into two parts and headlines has no padding
2108021 - Machine Controller stuck with Terminated Instances while Provisioning on AWS
2109052 - Add to application dropdown options are not visible on application-grouping sidebar action dropdown.
2109205 - HTTPS_PROXY ENV missing in some CSI driver operators
2109270 - Kube controllers crash when nodes are shut off in OpenStack
2109489 - Reply to arp requests on interfaces with no ip
2109709 - Namespace value is missing on the list when selecting "All namespaces" for operators
2109731 - alertmanager-main pods failing to start due to startupprobe timeout
2109866 - Cannot delete a Machine if a VM got stuck in ERROR
2109977 - storageclass should not be created for unsupported vsphere version
2110482 - [vsphere] failed to create cluster if datacenter is embedded in a Folder
2110723 - openshift-tests: allow -f to match tests for any test suite
2110737 - Master node in SchedulingDisabled after upgrade from 4.10.24 -> 4.11.0-rc.4
2111037 - Affinity rule created in console deployment for single-replica infrastructure
2111347 - dummy bug for 4.10.z bz2111335
2111471 - Node internal DNS address is not set for machine
2111475 - Fetch internal IPs of vms from dhcp server
2111587 - [4.11] Export OVS metrics
2111619 - Pods are unable to reach clusterIP services, ovn-controller isn't installing the group mod flows correctly
2111992 - OpenShift controller manager needs permissions to get/create/update leases for leader election
2112297 - bond-cni: Backport "mac duplicates" 4.11
2112353 - lifecycle.posStart hook does not have network connectivity.
2112908 - Search resource "virtualmachine" in "Home -> Search" crashes the console
2112912 - sum_irate doesn't work in OCP 4.8
2113926 - hypershift cluster deployment hang due to nil pointer dereference for hostedControlPlane.Spec.Etcd.Managed
2113938 - Fix e2e tests for [reboots][machine_config_labels] (tsc=nowatchdog)
2114574 - can not upgrade. Incorrect reading of olm.maxOpenShiftVersion
2114602 - Upgrade failing because restrictive scc is injected into version pod
2114964 - kola dhcp.propagation test failing
2115315 - README file for helm charts coded in Chinese shows messy characters when viewing in developer perspective.
2115435 - [4.11] INIT container stuck forever
2115564 - ClusterVersion availableUpdates is stale: PromQL conditional risks vs. slow/stuck Thanos
2115817 - Updates / config metrics are not available in 4.11
2116009 - Node Tuning Operator(NTO) - OCP upgrade failed due to node-tuning CO still progressing
2116557 - Order of config attributes are not maintained during conversion of PT4l from ptpconfig to ptp4l.0.config file
2117223 - kubernetes-nmstate-operator fails to install with error "no channel heads (entries not replaced by another entry) found in channel"
2117324 - catalog-operator fatal error: concurrent map writes
2117353 - kola dhcp.propagation test out of memory
2117370 - Migrate openshift-ansible to ansible-core
2117746 - Bump to latest k8s.io 1.24 release
2118214 - dummy bug for 4.10.z bz2118209
2118375 - pass the "--quiet" option via the buildconfig for s2i
Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/
Security fixes:

- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
- CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
- CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
- CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
- CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
- CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
- CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
- CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
- CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
Bug fixes:

- assisted-service repo pin-latest.py script should allow custom tags to be pinned (BZ# 2065661)
- assisted-service-build image is too big in size (BZ# 2066059)
- assisted-service pin-latest.py script should exclude the postgres image (BZ# 2076901)
- PXE artifacts need to be served via HTTP (BZ# 2078531)
- Implementing new service-agent protocol on agent side (BZ# 2081281)
- RHACM 2.6.0 images (BZ# 2090906)
- Assisted service POD keeps crashing after a bare metal host is created (BZ# 2093503)
- Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled (BZ# 2096106)
- Fix assisted CI jobs that fail for cluster-info readiness (BZ# 2097696)
- Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB (BZ# 2099277)
- The pre-selected search keyword is not readable (BZ# 2107736)
- The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI (BZ# 2111843)

Bugs fixed (https://bugzilla.redhat.com/):
2065661 - assisted-service repo pin-latest.py script should allow custom tags to be pinned
2066059 - assisted-service-build image is too big in size
2076901 - assisted-service pin-latest.py script should exclude the postgres image
2078531 - iPXE artifacts need to be served via HTTP
2081281 - Implementing new service-agent protocol on agent side
2090901 - Capital letters in install-config.yaml .platform.baremetal.hosts[].name cause bootkube errors
2090906 - RHACM 2.6.0 images
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2093503 - Assisted service POD keeps crashing after a bare metal host is created
2096106 - Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled
2096445 - Assisted service POD keeps crashing after a bare metal host is created
2096460 - Spoke BMH stuck "inspecting" when deployed via the converged workflow
2097696 - Fix assisted CI jobs that fail for cluster-info readiness
2099277 - Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB
2103703 - Automatic version upgrade triggered for oadp operator installed by cluster-backup-chart
2104117 - Spoke BMH stuck "available" after changing a BIOS attribute via the converged workflow
2104984 - Infrastructure operator missing clusterrole permissions for interacting with mutatingwebhookconfigurations
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2105339 - Search Application button on the Application Table for Subscription applications does not Redirect
2105357 - [UI] hypershift cluster creation error - n[0] is undefined
2106347 - Submariner error looking up service account submariner-operator/submariner-addon-sa
2106882 - Security Context Restrictions are restricting creation of some pods which affects the deployment of some applications
2107049 - The clusterrole for global clusterset did not created by default
2107065 - governance-policy-framework in CrashLoopBackOff state on spoke cluster: Failed to start manager {"error": "error listening on :8081: listen tcp :8081: bind: address already in use"}
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107370 - Helm Release resource recreation feature does not work with the local cluster
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108888 - Hypershift on AWS - control plane not running
2109370 - The button to create the cluster is not visible
2111203 - Add ocp 4.11 to filters for discovering clusters in ACM 2.6
2111218 - Create cluster - Infrastructure page crashes
2111651 - "View application" button on app table for Flux applications redirects to apiVersion=ocp instead of flux
2111663 - Hosted cluster in Pending import state
2111671 - Leaked namespaces after deleting hypershift deployment
2111770 - [ACM 2.6] there is no node info for remote cluster in multiple hubs
2111843 - The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI
2112180 - The policy page is crashed after input keywords in the search box
2112281 - config-policy-controller pod can't startup in the OCP3.11 managed cluster
2112318 - Can't delete the objects which are re-created by policy when deleting the policy
2112321 - BMAC reconcile loop never stops after changes
2112426 - No cluster discovered due to x509: certificate signed by unknown authority
2112478 - Value of delayAfterRunSeconds is not shown on the final submit panel and the word itself should not be wrapped.
2112793 - Can't view details of the policy template when set the spec.pruneObjectBehavior as unsupported value
2112803 - ClusterServiceVersion for release 2.6 branch references "latest" tag
2113787 - [ACM 2.6] can not delete namespaces after detaching the hosted cluster
2113838 - the cluster proxy-agent was deployed on the non-infra nodes
2113842 - [ACM 2.6] must restart hosting cluster registration pod if update work-manager-addon cr to change installNamespace
2114982 - Control plane type shows 'Standalone' for hypershift cluster
2115622 - Hub fromsecret function doesn't work for hosted mode in multiple hub
2115723 - Can't view details of the policy template for customer and hypershift cluster in hosted mode from UI
2115993 - Policy automation details panel was not updated after editing the mode back to disabled
2116211 - Count of violations with unknown status was not accurate when managed clusters have mixed status
2116329 - cluster-proxy-agent not startup due to the imagepullbackoff on spoke cluster
2117113 - The proxy-server-host was not correct in cluster-proxy-agent
2117187 - pruneObjectBehavior radio selection cannot work well and always switch the first one template in multiple configurationPolicy templates
2117480 - [ACM 2.6] infra-id of HypershiftDeployment doesn't work
2118338 - Report the "namespace not found" error after clicked view yaml link of a policy in the multiple hub env
2119326 - Can't view details of the SecurityContextConstraints policy for managed clusters from UI
Bugs fixed (https://bugzilla.redhat.com/):
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
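The remainder of this entry is the machine-readable JSON-LD record for VAR-202205-2059. As a rough sketch of how it might be consumed programmatically (this assumes the VARIoT site can serve the record as JSON at the entry URL; the exact endpoint is an assumption), standard tools suffice:

$ curl -s 'https://www.variotdbs.pl/vuln/VAR-202205-2059' | jq '.cve'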
{
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
"affected_products": {
"@id": "https://www.variotdbs.pl/ref/affected_products"
},
"configurations": {
"@id": "https://www.variotdbs.pl/ref/configurations"
},
"credits": {
"@id": "https://www.variotdbs.pl/ref/credits"
},
"cvss": {
"@id": "https://www.variotdbs.pl/ref/cvss/"
},
"description": {
"@id": "https://www.variotdbs.pl/ref/description/"
},
"exploit_availability": {
"@id": "https://www.variotdbs.pl/ref/exploit_availability/"
},
"external_ids": {
"@id": "https://www.variotdbs.pl/ref/external_ids/"
},
"iot": {
"@id": "https://www.variotdbs.pl/ref/iot/"
},
"iot_taxonomy": {
"@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
},
"patch": {
"@id": "https://www.variotdbs.pl/ref/patch/"
},
"problemtype_data": {
"@id": "https://www.variotdbs.pl/ref/problemtype_data/"
},
"references": {
"@id": "https://www.variotdbs.pl/ref/references/"
},
"sources": {
"@id": "https://www.variotdbs.pl/ref/sources/"
},
"sources_release_date": {
"@id": "https://www.variotdbs.pl/ref/sources_release_date/"
},
"sources_update_date": {
"@id": "https://www.variotdbs.pl/ref/sources_update_date/"
},
"threat_type": {
"@id": "https://www.variotdbs.pl/ref/threat_type/"
},
"title": {
"@id": "https://www.variotdbs.pl/ref/title/"
},
"type": {
"@id": "https://www.variotdbs.pl/ref/type/"
}
},
"@id": "https://www.variotdbs.pl/vuln/VAR-202205-2059",
"affected_products": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/affected_products#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"model": "macos",
"scope": "lt",
"trust": 1.0,
"vendor": "apple",
"version": "13.0"
},
{
"model": "linux",
"scope": "eq",
"trust": 1.0,
"vendor": "debian",
"version": "10.0"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "35"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "34"
},
{
"model": "vim",
"scope": "lt",
"trust": 1.0,
"vendor": "vim",
"version": "8.0.5023"
},
{
"model": "fedora",
"scope": "eq",
"trust": 1.0,
"vendor": "fedoraproject",
"version": "36"
}
],
"sources": [
{
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"credits": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/credits#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat",
"sources": [
{
"db": "PACKETSTORM",
"id": "168516"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "168139"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168284"
}
],
"trust": 0.8
},
"cve": "CVE-2022-1897",
"cvss": {
"@context": {
"cvssV2": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
},
"cvssV3": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
},
"severity": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/cvss/severity#"
},
"@id": "https://www.variotdbs.pl/ref/cvss/severity"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
},
"@id": "https://www.variotdbs.pl/ref/sources"
}
},
"data": [
{
"cvssV2": [
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "nvd@nist.gov",
"availabilityImpact": "PARTIAL",
"baseScore": 6.8,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 8.6,
"id": "CVE-2022-1897",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 1.1,
"vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
{
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"author": "VULHUB",
"availabilityImpact": "PARTIAL",
"baseScore": 6.8,
"confidentialityImpact": "PARTIAL",
"exploitabilityScore": 8.6,
"id": "VHN-423551",
"impactScore": 6.4,
"integrityImpact": "PARTIAL",
"severity": "MEDIUM",
"trust": 0.1,
"vectorString": "AV:N/AC:M/AU:N/C:P/I:P/A:P",
"version": "2.0"
}
],
"cvssV3": [
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "nvd@nist.gov",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2022-1897",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
"version": "3.1"
},
{
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"author": "security@huntr.dev",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"exploitabilityScore": 1.8,
"id": "CVE-2022-1897",
"impactScore": 5.9,
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"trust": 1.0,
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
"version": "3.0"
}
],
"severity": [
{
"author": "nvd@nist.gov",
"id": "CVE-2022-1897",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "security@huntr.dev",
"id": "CVE-2022-1897",
"trust": 1.0,
"value": "HIGH"
},
{
"author": "VULHUB",
"id": "VHN-423551",
"trust": 0.1,
"value": "MEDIUM"
},
{
"author": "VULMON",
"id": "CVE-2022-1897",
"trust": 0.1,
"value": "MEDIUM"
}
]
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-423551"
},
{
"db": "VULMON",
"id": "CVE-2022-1897"
},
{
"db": "NVD",
"id": "CVE-2022-1897"
},
{
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"description": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/description#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Out-of-bounds Write in GitHub repository vim/vim prior to 8.2. Vim is a cross-platform text editor. There is a security vulnerability in versions prior to Vim 8.2, which is caused by an out-of-bounds write problem. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: RHACS 3.72 enhancement and security update\nAdvisory ID: RHSA-2022:6714-01\nProduct: RHACS\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6714\nIssue date: 2022-09-26\nCVE Names: CVE-2015-20107 CVE-2022-0391 CVE-2022-1292 \n CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 \n CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 \n CVE-2022-24675 CVE-2022-24921 CVE-2022-28327 \n CVE-2022-29154 CVE-2022-29526 CVE-2022-30631 \n CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 \n=====================================================================\n\n1. Summary:\n\nUpdated images are now available for Red Hat Advanced Cluster Security for\nKubernetes (RHACS). The updated image includes new features and bug fixes. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRelease of RHACS 3.72 provides these changes:\n\nNew features\n* Automatic removal of nonactive clusters from RHACS: RHACS provides the\nability to configure your system to automatically remove nonactive clusters\nfrom RHACS so that you can monitor active clusters only. \n* Support for unauthenticated email integration: RHACS now supports\nunauthenticated SMTP for email integrations. This is insecure and not\nrecommended. \n* Support for Quay robot accounts: RHACS now supports use of robot accounts\nin quay.io integrations. You can create robot accounts in Quay that allow\nyou to share credentials for use in multiple repositories. \n* Ability to view Dockerfile lines in images that introduced components\nwith Common Vulnerabilities and Exposures (CVEs): In the Images view, under\nImage Findings, you can view individual lines in the Dockerfile that\nintroduced the components that have been identified as containing CVEs. \n* Network graph improvements: RHACS 3.72 includes some improvements to the\nNetwork Graph user interface. \n\nKnown issue\n* RHACS shows the wrong severity when two severities exist for a single\nvulnerability in a single distribution. This issue occurs because RHACS\nscopes severities by namespace rather than component. There is no\nworkaround. It is anticipated that an upcoming release will include a fix\nfor this issue. (ROX-12527)\n\nBug fixes\n* Before this update, the steps to configure OpenShift Container Platform\nOAuth for more than one URI were missing. The documentation has been\nrevised to include instructions for configuring OAuth in OpenShift\nContainer Platform to use more than one URI. For more information, see\nCreating additional routes for the OpenShift Container Platform OAuth\nserver. (ROX-11296)\n* Before this update, the autogenerated image integration, such as a Docker\nregistry integration, for a cluster is not deleted when the cluster is\nremoved from Central. This issue is fixed. (ROX-9398)\n* Before this update, the Image OS policy criteria did not support regular\nexpressions, or regex. However, the documentation indicated that regular\nexpressions were supported. 
This issue is fixed by adding support for\nregular expressions for the Image OS policy criteria. (ROX-12301)\n* Before this update, the syslog integration did not respect a configured\nTCP proxy. This is now fixed. \n* Before this update, the scanner-db pod failed to start when a resource\nquota was set for the stackrox namespace, because the init-db container in\nthe pod did not have any resources assigned to it. The init-db container\nfor ScannerDB now specifies resource requests and limits that match the db\ncontainer. (ROX-12291)\n\nNotable technical changes\n* Scanning support for Red Hat Enterprise Linux 9: RHEL 9 is now generally\navailable (GA). RHACS 3.72 introduces support for analyzing images built\nwith Red Hat Universal Base Image (UBI) 9 and Red Hat Enterprise Linux\n(RHEL) 9 RPMs for vulnerabilities. \n* Policy for CVEs with fixable CVSS of 6 or greater disabled by default:\nBeginning with this release, the Fixable CVSS \u003e= 6 and Privileged policy is\nno longer enabled by default for new RHACS installations. The configuration\nof this policy is not changed when upgrading an existing system. A new\npolicy Privileged Containers with Important and Critical Fixable CVEs,\nwhich gives an alert for containers running in privileged mode that have\nimportant or critical fixable vulnerabilities, has been added. \n\nSecurity Fix(es)\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nTo take advantage of the new features, bug fixes, and enhancements in RHACS\n3.72 you are advised to upgrade to RHACS 3.72.0. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nROX-12799 - Release RHACS 3.72.0\n\n6. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2015-20107\nhttps://access.redhat.com/security/cve/CVE-2022-0391\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-29154\nhttps://access.redhat.com/security/cve/CVE-2022-29526\nhttps://access.redhat.com/security/cve/CVE-2022-30631\nhttps://access.redhat.com/security/cve/CVE-2022-32206\nhttps://access.redhat.com/security/cve/CVE-2022-32208\nhttps://access.redhat.com/security/cve/CVE-2022-34903\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://docs.openshift.com/acs/3.72/release_notes/372-release-notes.html\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYzH0ItzjgjWX9erEAQg2Yg//fDLYNktH9vd06FrD5L77TeiYnD/Zx+f5\nfk12roODKMOpcV6BmnOyPG0a6POCmhHn1Dn6bOT+7Awx0b9A9cXXDk6jytkpDhh7\nO0OxzWZVVvSzNe1TL3WN9vwZqSpAYON8euLBEb16E8pmEv7vXKll3wMQIlctp6Nr\ney6DLL718z8ghXbtkkcGsBQqElM4jESvGm5xByMymfRFktvy9LSgTi+Zc7FY7gXL\nAHitJZiSm57D/pwUHvNltLLkxQfVAGuJXaTHYFyeIi6Z2pdDySYAXcr60mVd6eSh\n9/7qGwdsQARwmr174s0xMWRcns6UDvwIWifiXl6FUnTZFlia+lC3xIP1o2CXwoFP\nFr7LpF0L9h5BapjSRv1w6qkkJIyJhw5v9VmZQoQ3joZqRQi0I6qLOcp92eik63pM\ni11ppoeDNwjpSST40Ema3j9PflzxXB7PKBUfKWwqNc2dnWDkiEhNaXOAZ7MqgdLo\nMB3enlKV4deeWOb5OA1Vlv/lAAJM0h5AOgTIBddYs3CDsyoK9fKm1UF/BEhcWMyr\nkV3AJ0/zzAK6ev4hQmP8Ug4SbdiHNdM3X1vgH54OVJ3Al3E1nAEyYmELNUITrvXV\njJI5thbVwK78vOX9yWcmpZm879BnHnUPzGbS0lF5FVJOSZ8E7LvOE7lCM/dg094z\n0riGwT9O9Ys=\n=hArw\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2031228 - CVE-2021-43813 grafana: directory traversal vulnerability\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2115198 - build ceph containers for RHCS 5.2 release\n\n5. ==========================================================================\nUbuntu Security Notice USN-5507-1\nJuly 08, 2022\n\nvim vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n\nSummary:\n\nSeveral security issues were fixed in Vim. \n\nSoftware Description:\n- vim: Vi IMproved - enhanced vi editor\n\nDetails:\n\nIt was discovered that Vim incorrectly handled memory access. An \nattacker could potentially use this issue to cause the program to crash, \nuse unexpected values, or execute arbitrary code. (CVE-2022-1968)\n\nIt was discovered that Vim incorrectly handled memory access. 
An \nattacker could potentially use this issue to cause the corruption of \nsensitive information, a crash, or arbitrary code execution. \n(CVE-2022-1897, CVE-2022-1942)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n vim 2:7.4.1689-3ubuntu1.5+esm10\n\nIn general, a standard system update will make all the necessary changes. Summary:\n\nAn update is now available for RHOL-5.5-RHEL-8. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2058404 - CVE-2022-0759 kubeclient: kubeconfig parsing error can lead to MITM attacks\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1415 - Allow users to tune fluentd\nLOG-1539 - Events and CLO csv are not collected after running `oc adm must-gather --image=$downstream-clo-image `\nLOG-1713 - Reduce Permissions granted for prometheus-k8s service account\nLOG-2063 - Collector pods fail to start when a Vector only Cluster Logging instance is created. \nLOG-2134 - The infra logs are sent to app-xx indices\nLOG-2159 - Cluster Logging Pods in CrashLoopBackOff\nLOG-2165 - [Vector] Default log level debug makes it hard to find useful error/failure messages. \nLOG-2167 - [Vector] Collector pods fails to start with configuration error when using Kafka SASL over SSL\nLOG-2169 - [Vector] Logs not being sent to Kafka with SASL plaintext. \nLOG-2172 - [vector]The openshift-apiserver and ovn audit logs can not be collected. \nLOG-2242 - Log file metric exporter is still following /var/log/containers files. \nLOG-2243 - grafana-dashboard-cluster-logging should be deleted once clusterlogging/instance was removed\nLOG-2264 - Logging link should contain an icon\nLOG-2274 - [Logging 5.5] EO doesn\u0027t recreate secrets kibana and kibana-proxy after removing them. \nLOG-2276 - Fluent config format is hard to read via configmap\nLOG-2290 - ClusterLogging Instance status in not getting updated in UI\nLOG-2291 - [release-5.5] Events listing out of order in Kibana 6.8.1\nLOG-2294 - [Vector] Vector internal metrics are not exposed via HTTPS due to which OpenShift Monitoring Prometheus service cannot scrape the metrics endpoint. \nLOG-2300 - [Logging 5.5]ES pods can\u0027t be ready after removing secret/signing-elasticsearch\nLOG-2303 - [Logging 5.5] Elasticsearch cluster upgrade stuck\nLOG-2308 - configmap grafana-dashboard-elasticsearch is being created and deleted continously\nLOG-2333 - Journal logs not reaching Elasticsearch output\nLOG-2337 - [Vector] Missing @ prefix from the timestamp field in log record. \nLOG-2342 - [Logging 5.5] Kibana pod can\u0027t connect to ES cluster after removing secret/signing-elasticsearch: \"x509: certificate signed by unknown authority\"\nLOG-2384 - Provide a method to get authenticated from GCP\nLOG-2411 - [Vector] Audit logs forwarding not working. \nLOG-2412 - CLO\u0027s loki output url is parsed wrongly\nLOG-2413 - PriorityClass cluster-logging is deleted if provide an invalid log type\nLOG-2418 - EO supported time units don\u0027t match the units specified in CRDs. 
\nLOG-2439 - Telemetry: the managedStatus\u0026healthStatus\u0026version values are wrong\nLOG-2440 - [loki-operator] Live tail of logs does not work on OpenShift\nLOG-2444 - The write index is removed when `the size of the index` \u003e `diskThresholdPercent% * total size`. \nLOG-2460 - [Vector] Collector pods fail to start on a FIPS enabled cluster. \nLOG-2461 - [Vector] Vector auth config not generated when user provided bearer token is used in a secret for connecting to LokiStack. \nLOG-2463 - Elasticsearch operator repeatedly prints error message when checking indices\nLOG-2474 - EO shouldn\u0027t grant cluster-wide permission to system:serviceaccount:openshift-monitoring:prometheus-k8s when ES cluster is deployed. [openshift-logging 5.5]\nLOG-2522 - CLO supported time units don\u0027t match the units specified in CRDs. \nLOG-2525 - The container\u0027s logs are not sent to separate index if the annotation is added after the pod is ready. \nLOG-2546 - TLS handshake error on loki-gateway for FIPS cluster\nLOG-2549 - [Vector] [master] Journald logs not sent to the Log store when using Vector as collector. \nLOG-2554 - [Vector] [master] Fallback index is not used when structuredTypeKey is missing from JSON log data\nLOG-2588 - FluentdQueueLengthIncreasing rule failing to be evaluated. \nLOG-2596 - [vector]the condition in [transforms.route_container_logs] is inaccurate\nLOG-2599 - Supported values for level field don\u0027t match documentation\nLOG-2605 - $labels.instance is empty in the message when firing FluentdNodeDown alert\nLOG-2609 - fluentd and vector are unable to ship logs to elasticsearch when cluster-wide proxy is in effect\nLOG-2619 - containers violate PodSecurity -- Log Exporation\nLOG-2627 - containers violate PodSecurity -- Loki\nLOG-2649 - Level Critical should match the beginning of the line as the other levels\nLOG-2656 - Logging uses deprecated v1beta1 apis\nLOG-2664 - Deprecated Feature logs causing too much noise\nLOG-2665 - [Logging 5.5] Sometimes collector fails to push logs to Elasticsearch cluster\nLOG-2693 - Integration with Jaeger fails for ServiceMonitor\nLOG-2700 - [Vector] vector container can\u0027t start due to \"unknown field `pod_annotation_fields`\" . \nLOG-2703 - Collector DaemonSet is not removed when CLF is deleted for fluentd/vector only CL instance\nLOG-2725 - Upgrade logging-eventrouter Golang version and tags\nLOG-2731 - CLO keeps reporting `Reconcile ServiceMonitor retry error` and `Reconcile Service retry error` after creating clusterlogging. \nLOG-2732 - Prometheus Operator pod throws \u0027skipping servicemonitor\u0027 error on Jaeger integration\nLOG-2742 - unrecognized outputs when use the sts role secret\nLOG-2746 - CloudWatch forwarding rejecting large log events, fills tmpfs\nLOG-2749 - OpenShift Logging Dashboard for Elastic Shards shows \"active_primary\" instead of \"active\" shards. \nLOG-2753 - Update Grafana configuration for LokiStack integration on grafana/loki repo\nLOG-2763 - [Vector]{Master} Vector\u0027s healthcheck fails when forwarding logs to Lokistack. 
\nLOG-2764 - ElasticSearch operator does not respect referencePolicy when selecting oauth-proxy image\nLOG-2765 - ingester pod can not be started in IPv6 cluster\nLOG-2766 - [vector] failed to parse cluster url: invalid authority IPv6 http-proxy\nLOG-2772 - arn validation failed when role_arn=arn:aws-us-gov:xxx\nLOG-2773 - No cluster-logging-operator-metrics service in logging 5.5\nLOG-2778 - [Vector] [OCP 4.11] SA token not added to Vector config when connecting to LokiStack instance without CLF creds secret required by LokiStack. \nLOG-2784 - Japanese log messages are garbled at Kibana\nLOG-2793 - [Vector] OVN audit logs are missing the level field. \nLOG-2864 - [vector] Can not sent logs to default when loki is the default output in CLF\nLOG-2867 - [fluentd] All logs are sent to application tenant when loki is used as default logstore in CLF. \nLOG-2873 - [Vector] Cannot configure CPU/Memory requests/limits when using Vector as collector. \nLOG-2875 - Seeing a black rectangle box on the graph in Logs view\nLOG-2876 - The link to the \u0027Container details\u0027 page on the \u0027Logs\u0027 screen throws error\nLOG-2877 - When there is no query entered, seeing error message on the Logs view\nLOG-2882 - RefreshIntervalDropdown and TimeRangeDropdown always set back to its original values when switching between pages in \u0027Logs\u0027 screen\n\n6. Description:\n\nOpenShift sandboxed containers support for OpenShift Container Platform\nprovides users with built-in support for running Kata containers as an\nadditional, optional runtime. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:6102\n\nSpace precludes documenting all of the container images in this advisory. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.1-x86_64\n\nThe image digest is\nsha256:97410a5db655a9d3017b735c2c0747c849d09ff551765e49d5272b80c024a844\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.1-s390x\n\nThe image digest is\nsha256:13734de7e796e46f5403ef9ee918be88c12fdc9b73acb8777e0cc7c56a276794\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.1-ppc64le\n\nThe image digest is\nsha256:d0019b6b8b32cc9fea06562e6ce175086fa7de7b2b7dce171a8ac1a57f92f10b\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.1-aarch64\n\nThe image digest is\nsha256:3394a79e173ac17bc96a7256665701d3d7e2a95535a12f2ceb19ceb41dcd6b79\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2033256 - openshift-installer intermittent failure on AWS with \"Error: Provider produced inconsistent result after apply\" when creating the module.vpc.aws_route_table.private_routes resource\n2040715 - post 1.23 rebase: regression in service-load balancer reliability\n2063622 - Failed to install the podman package from repo rhocp-4.10-for-rhel-8-x86_64-rpms\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2102576 - [4.11] [Cluster storage Operator] DefaultStorageClassController report fake message \"No default StorageClass for this platform\" on azure and openstack\n2103638 - No need to pass to-image-base for `oc adm release new` command when use --from-release\n2103899 - [OVN] bonding fails after active-backup fail-over and reboot, kargs static IP\n2104386 - OVS-Configure doesn\u0027t iterate connection names containing spaces correctly\n2104435 - [dpu-network-operator] Updating images to be consistent with ART\n2104510 - Update ose-machine-config-operator images to be consistent with ART\n2104687 - MCP upgrades can stall waiting for master node reboots since MCC no longer gets drained\n2105056 - Openshift-Ansible RHEL 8 CI update\n2105444 - [OVN] Node to service traffic is blocked if service is \"internalTrafficPolicy: Local\" even backed pod is on the same node\n2106772 - openshift4/ose-operator-registry image is vulnerable to multiple CVEs\n2106795 - crio umask sometimes set to 0000\n2107003 - The bash completion doesn\u0027t work for get subcommand\n2107045 - OLM updates namespace labels even if they haven\u0027t changed\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107777 - Pipeline status filter and status colors doesn\u0027t work correctly with non-english languages\n2107871 - Import: Advanced option sentence is splited into two parts and headlines has no padding\n2108021 - Machine Controller stuck with Terminated Instances while Provisioning on AWS\n2109052 - Add to application dropdown options are not visible on application-grouping sidebar action dropdown. 
\n2109205 - HTTPS_PROXY ENV missing in some CSI driver operators\n2109270 - Kube controllers crash when nodes are shut off in OpenStack\n2109489 - Reply to arp requests on interfaces with no ip\n2109709 - Namespace value is missing on the list when selecting \"All namespaces\" for operators\n2109731 - alertmanager-main pods failing to start due to startupprobe timeout\n2109866 - Cannot delete a Machine if a VM got stuck in ERROR\n2109977 - storageclass should not be created for unsupported vsphere version\n2110482 - [vsphere] failed to create cluster if datacenter is embedded in a Folder\n2110723 - openshift-tests: allow -f to match tests for any test suite\n2110737 - Master node in SchedulingDisabled after upgrade from 4.10.24 -\u003e 4.11.0-rc.4\n2111037 - Affinity rule created in console deployment for single-replica infrastructure\n2111347 - dummy bug for 4.10.z bz2111335\n2111471 - Node internal DNS address is not set for machine\n2111475 - Fetch internal IPs of vms from dhcp server\n2111587 - [4.11] Export OVS metrics\n2111619 - Pods are unable to reach clusterIP services, ovn-controller isn\u0027t installing the group mod flows correctly\n2111992 - OpenShift controller manager needs permissions to get/create/update leases for leader election\n2112297 - bond-cni: Backport \"mac duplicates\" 4.11\n2112353 - lifecycle.posStart hook does not have network connectivity. \n2112908 - Search resource \"virtualmachine\" in \"Home -\u003e Search\" crashes the console\n2112912 - sum_irate doesn\u0027t work in OCP 4.8\n2113926 - hypershift cluster deployment hang due to nil pointer dereference for hostedControlPlane.Spec.Etcd.Managed\n2113938 - Fix e2e tests for [reboots][machine_config_labels] (tsc=nowatchdog)\n2114574 - can not upgrade. Incorrect reading of olm.maxOpenShiftVersion\n2114602 - Upgrade failing because restrictive scc is injected into version pod\n2114964 - kola dhcp.propagation test failing\n2115315 - README file for helm charts coded in Chinese shows messy characters when viewing in developer perspective. \n2115435 - [4.11] INIT container stuck forever\n2115564 - ClusterVersion availableUpdates is stale: PromQL conditional risks vs. slow/stuck Thanos\n2115817 - Updates / config metrics are not available in 4.11\n2116009 - Node Tuning Operator(NTO) - OCP upgrade failed due to node-tuning CO still progressing\n2116557 - Order of config attributes are not maintained during conversion of PT4l from ptpconfig to ptp4l.0.config file\n2117223 - kubernetes-nmstate-operator fails to install with error \"no channel heads (entries not replaced by another entry) found in channel\"\n2117324 - catalog-operator fatal error: concurrent map writes\n2117353 - kola dhcp.propagation test out of memory\n2117370 - Migrate openshift-ansible to ansible-core\n2117746 - Bump to latest k8s.io 1.24 release\n2118214 - dummy bug for 4.10.z bz2118209\n2118375 - pass the \"--quiet\" option via the buildconfig for s2i\n\n5. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n* CVE-2022-30629 golang: crypto/tls: session tickets lack random\nticket_age_add\n\n* CVE-2022-1705 golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n\n* CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\nBug fixes:\n\n* assisted-service repo pin-latest.py script should allow custom tags to be\npinned (BZ# 2065661)\n\n* assisted-service-build image is too big in size (BZ# 2066059)\n\n* assisted-service pin-latest.py script should exclude the postgres image\n(BZ# 2076901)\n\n* PXE artifacts need to be served via HTTP (BZ# 2078531)\n\n* Implementing new service-agent protocol on agent side (BZ# 2081281)\n\n* RHACM 2.6.0 images (BZ# 2090906)\n\n* Assisted service POD keeps crashing after a bare metal host is created\n(BZ# 2093503)\n\n* Assisted service triggers the worker nodes re-provisioning on the hub\ncluster when the converged flow is enabled (BZ# 2096106)\n\n* Fix assisted CI jobs that fail for cluster-info readiness (BZ# 2097696)\n\n* Nodes are required to have installation disks of at least 120GB instead\nof at minimum of 100GB (BZ# 2099277)\n\n* The pre-selected search keyword is not readable (BZ# 2107736)\n\n* The value of label expressions in the new placement for policy and\npolicysets cannot be shown real-time from UI (BZ# 2111843)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2065661 - assisted-service repo pin-latest.py script should allow custom tags to be pinned\n2066059 - assisted-service-build image is too big in size\n2076901 - assisted-service pin-latest.py script should exclude the postgres image\n2078531 - iPXE artifacts need to be served via HTTP\n2081281 - Implementing new service-agent protocol on agent side\n2090901 - Capital letters in install-config.yaml .platform.baremetal.hosts[].name cause bootkube errors\n2090906 - RHACM 2.6.0 images\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2093503 - Assisted service POD keeps crashing after a bare metal host is created\n2096106 - Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled\n2096445 - Assisted service POD keeps crashing after a bare metal host is created\n2096460 - Spoke BMH stuck \"inspecting\" when deployed via the converged workflow\n2097696 - Fix assisted CI jobs that fail for cluster-info readiness\n2099277 - Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB\n2103703 - Automatic version upgrade triggered for oadp operator installed by cluster-backup-chart\n2104117 - Spoke BMH stuck \"available\" 
after changing a BIOS attribute via the converged workflow\n2104984 - Infrastructure operator missing clusterrole permissions for interacting with mutatingwebhookconfigurations\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2105339 - Search Application button on the Application Table for Subscription applications does not Redirect\n2105357 - [UI] hypershift cluster creation error - n[0] is undefined\n2106347 - Submariner error looking up service account submariner-operator/submariner-addon-sa\n2106882 - Security Context Restrictions are restricting creation of some pods which affects the deployment of some applications\n2107049 - The clusterrole for global clusterset did not created by default\n2107065 - governance-policy-framework in CrashLoopBackOff state on spoke cluster: Failed to start manager {\"error\": \"error listening on :8081: listen tcp :8081: bind: address already in use\"}\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107370 - Helm Release resource recreation feature does not work with the local cluster\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108888 - Hypershift on AWS - control plane not running\n2109370 - The button to create the cluster is not visible\n2111203 - Add ocp 4.11 to filters for discovering clusters in ACM 2.6\n2111218 - Create cluster - Infrastructure page crashes\n2111651 - \"View application\" button on app table for Flux applications redirects to apiVersion=ocp instead of flux\n2111663 - Hosted cluster in Pending import state\n2111671 - Leaked namespaces after deleting hypershift deployment\n2111770 - [ACM 2.6] there is no node info for remote cluster in multiple hubs\n2111843 - The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI\n2112180 - The policy page is crashed after input keywords in the search box\n2112281 - config-policy-controller pod can\u0027t startup in the OCP3.11 managed cluster\n2112318 - Can\u0027t delete the objects which are re-created by policy when deleting the policy\n2112321 - BMAC reconcile loop never stops after changes\n2112426 - No cluster discovered due to x509: certificate signed by unknown authority\n2112478 - Value of delayAfterRunSeconds is not shown on the final submit panel and the word itself should not be wrapped. 
\n2112793 - Can\u0027t view details of the policy template when set the spec.pruneObjectBehavior as unsupported value\n2112803 - ClusterServiceVersion for release 2.6 branch references \"latest\" tag\n2113787 - [ACM 2.6] can not delete namespaces after detaching the hosted cluster\n2113838 - the cluster proxy-agent was deployed on the non-infra nodes\n2113842 - [ACM 2.6] must restart hosting cluster registration pod if update work-manager-addon cr to change installNamespace\n2114982 - Control plane type shows \u0027Standalone\u0027 for hypershift cluster\n2115622 - Hub fromsecret function doesn\u0027t work for hosted mode in multiple hub\n2115723 - Can\u0027t view details of the policy template for customer and hypershift cluster in hosted mode from UI\n2115993 - Policy automation details panel was not updated after editing the mode back to disabled\n2116211 - Count of violations with unknown status was not accurate when managed clusters have mixed status\n2116329 - cluster-proxy-agent not startup due to the imagepullbackoff on spoke cluster\n2117113 - The proxy-server-host was not correct in cluster-proxy-agent\n2117187 - pruneObjectBehavior radio selection cannot work well and always switch the first one template in multiple configurationPolicy templates\n2117480 - [ACM 2.6] infra-id of HypershiftDeployment doesn\u0027t work\n2118338 - Report the \"namespace not found\" error after clicked view yaml link of a policy in the multiple hub env\n2119326 - Can\u0027t view details of the SecurityContextConstraints policy for managed clusters from UI\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n\n5",
"sources": [
{
"db": "NVD",
"id": "CVE-2022-1897"
},
{
"db": "VULHUB",
"id": "VHN-423551"
},
{
"db": "VULMON",
"id": "CVE-2022-1897"
},
{
"db": "PACKETSTORM",
"id": "168516"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "167729"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "168139"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168284"
}
],
"trust": 1.89
},
"external_ids": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/external_ids#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"db": "NVD",
"id": "CVE-2022-1897",
"trust": 2.1
},
{
"db": "PACKETSTORM",
"id": "168516",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "167729",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168112",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168289",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168287",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168139",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168284",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "169443",
"trust": 0.2
},
{
"db": "PACKETSTORM",
"id": "168022",
"trust": 0.2
},
{
"db": "CNVD",
"id": "CNVD-2022-50690",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "170083",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "167944",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168150",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168538",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168378",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168182",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168222",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168013",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "169435",
"trust": 0.1
},
{
"db": "PACKETSTORM",
"id": "168213",
"trust": 0.1
},
{
"db": "CNNVD",
"id": "CNNVD-202205-4246",
"trust": 0.1
},
{
"db": "VULHUB",
"id": "VHN-423551",
"trust": 0.1
},
{
"db": "VULMON",
"id": "CVE-2022-1897",
"trust": 0.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-423551"
},
{
"db": "VULMON",
"id": "CVE-2022-1897"
},
{
"db": "PACKETSTORM",
"id": "168516"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "167729"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "168139"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168284"
},
{
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"id": "VAR-202205-2059",
"iot": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/iot#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": true,
"sources": [
{
"db": "VULHUB",
"id": "VHN-423551"
}
],
"trust": 0.01
},
"last_update_date": "2025-12-22T22:49:48.637000Z",
"patch": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/patch#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"title": "Ubuntu Security Notice: USN-5507-1: Vim vulnerabilities",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5507-1"
},
{
"title": "Red Hat: Moderate: vim security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225942 - Security Advisory"
},
{
"title": "Red Hat: Moderate: vim security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225813 - Security Advisory"
},
{
"title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226184 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226103 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226182 - Security Advisory"
},
{
"title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226051 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226283 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226183 - Security Advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226507 - Security Advisory"
},
{
"title": "Red Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227055 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227058 - Security Advisory"
},
{
"title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226024 - Security Advisory"
},
{
"title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226714 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226370 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226271 - Security Advisory"
},
{
"title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226696 - Security Advisory"
},
{
"title": "Debian CVElist Bug Report Logs: vim: CVE-2022-1942 CVE-2022-1968 CVE-2022-2000 CVE-2022-2124 CVE-2022-2125 CVE-2022-2126 CVE-2022-2129 CVE-2022-2285 CVE-2022-2288 CVE-2022-2304 CVE-2022-2207 CVE-2022-1616 CVE-2022-1619 CVE-2022-1621 CVE-2022-1720 CVE-2022-1785 CVE-2022-1851 CVE-2022-1897 CVE-2022-1898",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=11dbcf77118f7ec64d0ef6c1e3c087e3"
},
{
"title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226156 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Container Platform 4.11.45 bug fix and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20234053 - Security Advisory"
},
{
"title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228750 - Security Advisory"
},
{
"title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429 - Security Advisory"
},
{
"title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230408 - Security Advisory"
},
{
"title": "Amazon Linux AMI: ALAS-2022-1628",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1628"
},
{
"title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228889 - Security Advisory"
},
{
"title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228781 - Security Advisory"
},
{
"title": "Amazon Linux 2: ALAS2-2022-1829",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1829"
},
{
"title": "Amazon Linux 2022: ALAS2022-2022-116",
"trust": 0.1,
"url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-116"
}
],
"sources": [
{
"db": "VULMON",
"id": "CVE-2022-1897"
}
]
},
"problemtype_data": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"problemtype": "CWE-787",
"trust": 1.1
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-423551"
},
{
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"references": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/references#",
"data": {
"@container": "@list"
},
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": [
{
"trust": 1.2,
"url": "https://support.apple.com/kb/ht213488"
},
{
"trust": 1.2,
"url": "https://huntr.dev/bounties/82c12151-c283-40cf-aa05-2e39efa89118"
},
{
"trust": 1.2,
"url": "http://seclists.org/fulldisclosure/2022/oct/28"
},
{
"trust": 1.2,
"url": "http://seclists.org/fulldisclosure/2022/oct/41"
},
{
"trust": 1.2,
"url": "https://security.gentoo.org/glsa/202208-32"
},
{
"trust": 1.2,
"url": "https://github.com/vim/vim/commit/338f1fc0ee3ca929387448fe464579d6113fa76a"
},
{
"trust": 1.2,
"url": "https://lists.debian.org/debian-lts-announce/2022/11/msg00032.html"
},
{
"trust": 1.1,
"url": "https://security.gentoo.org/glsa/202305-16"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qmfhbc5oqxdpv2sdya2juqgvcpyastjb/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/ozslfikfyu5y2km5ejkqnyhwrubdq4gj/"
},
{
"trust": 1.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/tynk6sdcmolqjoi3b4aoe66p2g2ih4zm/"
},
{
"trust": 1.0,
"url": "https://lists.debian.org/debian-lts-announce/2025/03/msg00023.html"
},
{
"trust": 0.9,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-2097"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/team/contact/"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-1292"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-1586"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-2068"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-1785"
},
{
"trust": 0.8,
"url": "https://bugzilla.redhat.com/):"
},
{
"trust": 0.8,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
},
{
"trust": 0.8,
"url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-1897"
},
{
"trust": 0.8,
"url": "https://access.redhat.com/security/cve/cve-2022-1927"
},
{
"trust": 0.7,
"url": "https://access.redhat.com/security/updates/classification/#moderate"
},
{
"trust": 0.6,
"url": "https://access.redhat.com/security/cve/cve-2022-30631"
},
{
"trust": 0.5,
"url": "https://issues.jboss.org/):"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-32206"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-32208"
},
{
"trust": 0.5,
"url": "https://access.redhat.com/security/cve/cve-2022-29154"
},
{
"trust": 0.5,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
},
{
"trust": 0.4,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-32250"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-1012"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
},
{
"trust": 0.3,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
},
{
"trust": 0.3,
"url": "https://access.redhat.com/security/cve/cve-2022-2526"
},
{
"trust": 0.2,
"url": "https://ubuntu.com/security/notices/usn-5507-1"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2015-20107"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24675"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-24675"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-0391"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-34903"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30632"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-30629"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-32148"
},
{
"trust": 0.2,
"url": "https://access.redhat.com/security/cve/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
},
{
"trust": 0.2,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qmfhbc5oqxdpv2sdya2juqgvcpyastjb/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/tynk6sdcmolqjoi3b4aoe66p2g2ih4zm/"
},
{
"trust": 0.1,
"url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ozslfikfyu5y2km5ejkqnyhwrubdq4gj/"
},
{
"trust": 0.1,
"url": "https://cwe.mitre.org/data/definitions/787.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov"
},
{
"trust": 0.1,
"url": "https://alas.aws.amazon.com/alas-2022-1628.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6714"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/acs/3.72/release_notes/372-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-29526"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-24921"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28327"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27782"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27776"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-22576"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-43813"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/1548993"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-27774"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0670"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-40528"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/2789521"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-29824"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6024"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1968"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/articles/11258"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-0759"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/updates/classification/#important"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2021-38561"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0759"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6051"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/sandboxed_containers/sandboxed-containers-release-notes.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:7058"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/latest/sandboxed_containers/upgrade-sandboxed-containers.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2832"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-40674"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-2832"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6103"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6102"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6370"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30630"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30635"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-30633"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/install/index#installing"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-31129"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/security/cve/cve-2022-1962"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6182"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/logging/cluster-logging-release-notes.html"
},
{
"trust": 0.1,
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-32148"
},
{
"trust": 0.1,
"url": "https://docs.openshift.com/container-platform/4.10/logging/cluster-logging-upgrading.html"
},
{
"trust": 0.1,
"url": "https://access.redhat.com/errata/rhsa-2022:6183"
}
],
"sources": [
{
"db": "VULHUB",
"id": "VHN-423551"
},
{
"db": "VULMON",
"id": "CVE-2022-1897"
},
{
"db": "PACKETSTORM",
"id": "168516"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "167729"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "168139"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168284"
},
{
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"sources": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#",
"data": {
"@container": "@list"
}
},
"data": [
{
"db": "VULHUB",
"id": "VHN-423551"
},
{
"db": "VULMON",
"id": "CVE-2022-1897"
},
{
"db": "PACKETSTORM",
"id": "168516"
},
{
"db": "PACKETSTORM",
"id": "168022"
},
{
"db": "PACKETSTORM",
"id": "167729"
},
{
"db": "PACKETSTORM",
"id": "168112"
},
{
"db": "PACKETSTORM",
"id": "169443"
},
{
"db": "PACKETSTORM",
"id": "168139"
},
{
"db": "PACKETSTORM",
"id": "168287"
},
{
"db": "PACKETSTORM",
"id": "168289"
},
{
"db": "PACKETSTORM",
"id": "168284"
},
{
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"sources_release_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-05-27T00:00:00",
"db": "VULHUB",
"id": "VHN-423551"
},
{
"date": "2022-05-27T00:00:00",
"db": "VULMON",
"id": "CVE-2022-1897"
},
{
"date": "2022-09-27T15:41:11",
"db": "PACKETSTORM",
"id": "168516"
},
{
"date": "2022-08-10T15:50:41",
"db": "PACKETSTORM",
"id": "168022"
},
{
"date": "2022-07-11T14:35:31",
"db": "PACKETSTORM",
"id": "167729"
},
{
"date": "2022-08-19T15:03:34",
"db": "PACKETSTORM",
"id": "168112"
},
{
"date": "2022-10-20T14:21:57",
"db": "PACKETSTORM",
"id": "169443"
},
{
"date": "2022-08-24T13:06:10",
"db": "PACKETSTORM",
"id": "168139"
},
{
"date": "2022-09-07T17:07:14",
"db": "PACKETSTORM",
"id": "168287"
},
{
"date": "2022-09-07T17:09:04",
"db": "PACKETSTORM",
"id": "168289"
},
{
"date": "2022-09-07T16:57:47",
"db": "PACKETSTORM",
"id": "168284"
},
{
"date": "2022-05-27T15:15:07.620000",
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"sources_update_date": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
"data": {
"@container": "@list"
}
},
"data": [
{
"date": "2022-12-03T00:00:00",
"db": "VULHUB",
"id": "VHN-423551"
},
{
"date": "2023-11-07T00:00:00",
"db": "VULMON",
"id": "CVE-2022-1897"
},
{
"date": "2025-11-03T21:15:50.817000",
"db": "NVD",
"id": "CVE-2022-1897"
}
]
},
"title": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/title#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "Red Hat Security Advisory 2022-6714-01",
"sources": [
{
"db": "PACKETSTORM",
"id": "168516"
}
],
"trust": 0.1
},
"type": {
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/type#",
"sources": {
"@container": "@list",
"@context": {
"@vocab": "https://www.variotdbs.pl/ref/sources#"
}
}
},
"data": "arbitrary, code execution",
"sources": [
{
"db": "PACKETSTORM",
"id": "167729"
}
],
"trust": 0.1
}
}
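
The record above follows the VARIOT JSON-LD layout: each top-level field (external_ids, patch, references, sources, and so on) wraps an @context vocabulary block around a data list, and most items carry a per-source trust weight. As a minimal sketch of how such a record might be consumed, assuming a local copy of the entry has been saved as var-202205-2059.json (the filename and the 0.8 trust cutoff are illustrative assumptions, not part of the VARIOT specification):

    import json

    # Minimal sketch: load a locally saved VARIOT record and pull out its
    # higher-trust reference URLs plus the databases that contributed data.
    # The filename and the 0.8 trust cutoff are assumptions for illustration.
    with open("var-202205-2059.json", encoding="utf-8") as f:
        record = json.load(f)

    # Each "references" item pairs a URL with a trust weight.
    for ref in record.get("references", {}).get("data", []):
        if ref.get("trust", 0) >= 0.8:
            print(ref["url"])

    # "sources" lists the databases (NVD, VULHUB, PACKETSTORM, ...) behind
    # this aggregated entry.
    for src in record.get("sources", {}).get("data", []):
        print(src["db"], src["id"])

Run against this entry, the first loop would print only the references weighted at or above 0.8, such as the NVD detail page, the huntr.dev report, and the upstream vim/vim fix commit.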