Virtualization software vendor VMware has released the latest version of vCenter Server, designated 8.0U1. The update brings performance and stability improvements to the vCenter server, including reduced RAM usage by vCenter processes. In addition, as of version 8.0U1, a feature has been added to verify the connection between vCenter servers in a cluster. Read on for more details.
Resolved Issues:
Security Features Issues
-
Enabling TLS 1.0 on ESXi 8.0 hosts causes connection drops
ESXi 8.0 supports OpenSSL 3.0, and the only TLS protocol enabled by default is TLS 1.2. If you try to enable TLS 1.0 in
/UserVars/ESXiVPsDisabledProtocols
by using ESXCLI commands, connections drop. This issue is resolved in this release.
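As a way to reason about this setting, here is a minimal sketch (not a VMware API; the helper name and the example values are assumptions) of how the comma-separated value of /UserVars/ESXiVPsDisabledProtocols determines whether TLS 1.0 is enabled:

```python
# Hypothetical helper: decide whether TLS 1.0 is effectively enabled based on
# the /UserVars/ESXiVPsDisabledProtocols advanced setting, whose value is a
# comma-separated list of disabled protocols (e.g. "sslv3,tlsv1,tlsv1.1").

def tls10_enabled(disabled_protocols: str) -> bool:
    """Return True if 'tlsv1' is absent from the disabled-protocols list."""
    disabled = {p.strip().lower() for p in disabled_protocols.split(",") if p.strip()}
    return "tlsv1" not in disabled

# Example value that leaves only TLS 1.2 enabled:
print(tls10_enabled("sslv3,tlsv1,tlsv1.1"))  # False: TLS 1.0 stays disabled
# Removing 'tlsv1' from the list would re-enable TLS 1.0:
print(tls10_enabled("sslv3,tlsv1.1"))        # True
```

The fix in this release concerns the connection drops that followed re-enabling TLS 1.0 this way, not the format of the setting itself.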
Miscellaneous Issues
-
You might see wrong metadata for delete operation of a vSphere Lifecycle Manager depot that you use to create an image to manage a standalone ESXi host
Starting with vSphere 8.0, you can create an image based on a vSphere Lifecycle Manager depot, online or offline, to manage the lifecycle of any standalone ESXi host that is part of your vCenter Server inventory. In rare cases, when you delete such a depot, the metadata for the delete task might not be correct. For example, you see the hostid and hostIP details populated under clusterName and clusterID, such as:
clusterName = 10.161.153.136,
clusterId = host-54,
entityName = <null>,
entityId = <null>,
The issue also occurs when you delete a depot downloaded by using VMware vSphere Update Manager Download Service (UMDS), which is part of the desired state of a standalone host. The issue has no impact on any vSphere Lifecycle Manager operations and affects only task metadata for standalone hosts.
This issue is resolved in this release.
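To make the symptom above concrete, the following illustrative check (not a VMware API; the function name is hypothetical) flags task metadata where host details have spilled into the cluster fields, as in the example where clusterName holds a host IP and clusterId holds a host identifier like "host-54":

```python
import re

# Illustrative check: detect task metadata where host details appear under
# the cluster fields (clusterName looks like an IP, clusterId like "host-N").
def metadata_looks_swapped(meta: dict) -> bool:
    name = str(meta.get("clusterName", ""))
    cid = str(meta.get("clusterId", ""))
    looks_like_ip = re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name) is not None
    looks_like_host_id = cid.startswith("host-")
    return looks_like_ip or looks_like_host_id

bad = {"clusterName": "10.161.153.136", "clusterId": "host-54",
       "entityName": None, "entityId": None}
print(metadata_looks_swapped(bad))  # True
```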
-
If you configure a VM at HW version earlier than 20 with a Vendor Device Group, such VMs might not work as expected
Vendor Device Groups, which enable binding of high-speed networking devices and GPUs, are supported only on VMs with HW version 20 and later, but you are not prevented from configuring a VM at a HW version earlier than 20 with a Vendor Device Group. Such VMs might not work as expected: for example, they might fail to power on.
This issue is resolved in this release.
Server Configuration Issues
-
Changing an Input/Output Operations Per Second (IOPS) limit might cause a significant drop in the I/O throughput of virtual machines
When you change an IOPS limit based on the Storage I/O Control (SIOC) by using a Storage Policy Based Management (SPBM) policy, you might see significantly slower VM performance. Normally, when you set an SPBM policy, IOPS limits are handled by an I/O filter, while mClock, the default I/O scheduler, handles reservations and shares. Due to a logic fault, when you change an existing IOPS limit, I/Os might throttle at the mClock scheduler instead of at the I/O filter. As a result, I/Os reach the I/O filter with a significant delay, which causes a drop in the I/O throughput of virtual machines.
This issue is resolved in this release. The fix makes sure that IOPS limits are handled by the I/O filter, while mClock handles the reservations and shares. For more information, see VMware knowledge base article 89951.
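To illustrate the concept of enforcing an IOPS limit at a filter layer, here is a minimal token-bucket sketch. This is only an illustration of the general technique; it is not VMware's SIOC or mClock implementation, and all names are hypothetical:

```python
# Minimal token-bucket sketch of an IOPS limit enforced at an I/O filter
# layer: the bucket refills once per one-second window, and an I/O is
# admitted only while tokens remain. Reservations and shares would be
# handled elsewhere (by the scheduler), as the release note describes.

class IopsLimiter:
    def __init__(self, limit_iops: int):
        self.limit = limit_iops
        self.tokens = limit_iops
        self.window_start = 0.0

    def admit(self, now: float) -> bool:
        # Refill the bucket at the start of each one-second window.
        if now - self.window_start >= 1.0:
            self.tokens = self.limit
            self.window_start = now
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False  # I/O is throttled until the next window

limiter = IopsLimiter(limit_iops=100)
admitted = sum(limiter.admit(now=0.5) for _ in range(150))
print(admitted)  # 100: I/Os beyond the limit in this window are throttled
```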
Known Issues
Installation, Upgrade, and Migration Issues
-
Firmware compliance details are missing from a vSphere Lifecycle Manager image compliance report for an ESXi standalone host
Firmware compliance details might be missing from a vSphere Lifecycle Manager image compliance report for an ESXi standalone host in two cases:
-
You run a compliance report against a standalone host managed with a vSphere Lifecycle Manager image from vSphere Client and then navigate away before the compliance report gets generated.
-
You trigger a page refresh after the image compliance reports are generated.
In such cases, even when you have the firmware package available in the Desired State, the firmware compliance section remains empty when you revisit or refresh the vSphere Client browsing session. If you use the GET image compliance API, the firmware compliance details are missing from the response.
Workaround: Invoke the image compliance scan for a standalone host managed with a vSphere Lifecycle Manager image by using the vSphere Client, and do not navigate away or refresh the browser. For the API, use the Check image compliance API to fetch the firmware details, as opposed to GET image compliance.
-
Failed parallel remediation by using vSphere Lifecycle Manager on one ESXi host might cause other hosts to remain in a pending reboot state
An accidental loss of network connectivity during a parallel remediation by using vSphere Lifecycle Manager might cause the operation to fail on one of the ESXi hosts. Remediation on other hosts continues, but the hosts cannot reboot to complete the task.
Workaround: If an ESXi host consistently fails remediation attempts, manually trigger a reboot. For more information, see VMware knowledge base article 91260.
-
You see an error Failed to get ceip status in the Virtual Appliance Management Interface (VAMI) during update to vCenter Server 8.0 Update 1
During an update, vCenter stops and restarts the VMDir service. Within this interval, if you try to log in to the VAMI, you might see an error such as
Failed to get ceip status
This is expected and does not indicate an actual issue with the vCenter system.
Workaround: Wait for the VMDir service to restart and refresh the Virtual Appliance Management Interface.
Miscellaneous Issues
-
In Hybrid Linked Mode, the cloud vCenter is not able to discover plug-ins deployed on an on-prem vCenter
Hybrid Linked Mode allows you to link your cloud vCenter Server instance with an on-premises vCenter Single Sign-On domain, but the cloud vCenter might not be able to discover plug-ins deployed on the on-prem instance because it does not have the necessary permissions.
Workaround: Install the vCenter Cloud Gateway in your on-premises environment and either browse the plug-ins deployed on the on-prem instance from the VMware Cloud Console or directly from the vSphere Client on the on-prem vCenter.
Networking Issues
-
Hot adding and removing of DirectPath I/O devices is not automatically enabled on virtual machines
With vSphere 8.0 Update 1, you can use the vSphere API to add or remove a DirectPath I/O device without powering off VMs. However, if you enable the hotplug functionality that allows you to hot add and remove DirectPath I/O devices on a VM, and then use that VM to create an OVF and deploy a new VM, the new VM might not have the hotplug functionality automatically enabled.
Workaround: Enable the hotplug functionality as described in Hot-add and Hot-remove support for VMDirectPath I/O Devices.
-
Overlapping hot-add and hot-remove operations for DirectPath I/O devices might fail
With vSphere 8.0 Update 1, you can use the vSphere API to add or remove a DirectPath I/O device without powering off VMs. However, if you run several operations at the same time, some of the overlapping tasks might fail.
Workaround: Plan for 20 seconds processing time between each hot-add or hot-remove operation for DirectPath I/O devices.
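One way to apply the workaround in automation is to schedule the operations so that consecutive ones never start within 20 seconds of each other. The sketch below is hypothetical (not a VMware API); it computes the run times up front so the spacing logic can be exercised without real waiting:

```python
# Illustrative scheduler enforcing the ~20-second spacing between
# DirectPath I/O hot-add/hot-remove operations from the workaround.

MIN_SPACING_S = 20.0

def plan_operations(request_times, min_spacing=MIN_SPACING_S):
    """Given the times operations are requested, return the times they
    should actually run so consecutive operations never overlap."""
    scheduled = []
    earliest = float("-inf")
    for t in request_times:
        run_at = max(t, earliest)
        scheduled.append(run_at)
        earliest = run_at + min_spacing
    return scheduled

# Three hot-add requests arriving almost at once are spread 20 s apart:
print(plan_operations([0.0, 1.0, 2.0]))  # [0.0, 20.0, 40.0]
```

In a real script, the caller would sleep until each scheduled time before issuing the next hot-add or hot-remove call.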