VMware, the virtualization software vendor, has just released the next version of VMware vSphere Hypervisor (ESXi), 8.0 Update 2. The update brings performance improvements to the ESXi CPU scheduler, especially on modern systems with high core counts, such as Intel Sapphire Rapids with up to 60 cores per socket. In addition, ESXi 8.0 Update 2 adds vSphere Quick Boot support for several servers from different vendors, including Dell VxRail VD-4510c and VxRail VD-4520c, Fujitsu PRIMERGY RX2540 M7 and PRIMERGY RX2530 M7, and Lenovo ThinkSystem SR635 V3, SR645 V3, SR655 V3 and ST650 V3. For more information, read on in the rest of the article.
What's New:
- With vSphere 8.0 Update 2, vSphere Distributed Services Engine adds support for:
  - NVIDIA BlueField-2 DPUs to server designs from Fujitsu (Intel Sapphire Rapids).
- ESXi 8.0 Update 2 adds support to vSphere Quick Boot for multiple servers, including:
  - Dell
    - VxRail VD-4510c
    - VxRail VD-4520c
  - Fujitsu
    - PRIMERGY RX2540 M7
    - PRIMERGY RX2530 M7
  - Lenovo
    - ThinkSystem SR635 V3
    - ThinkSystem SR645 V3
    - ThinkSystem SR655 V3
    - ThinkSystem ST650 V3
  For the full list of supported servers, see the VMware Compatibility Guide.
- In-Band Error-Correcting Code (IB ECC) support: With vSphere 8.0 Update 2, you can use IB ECC on hardware platforms that support this option to perform data integrity checks without the need for actual ECC type DDR memory.
- Support for Graphics and AI/ML workloads on Intel ATS-M: vSphere 8.0 Update 2 adds support for graphics and AI/ML workloads on Intel ATS-M.
- Enhanced ESXi CPU Scheduler: vSphere 8.0 Update 2 adds performance enhancements in the ESXi CPU scheduler for newer generation systems that use high core count CPUs, such as Intel Sapphire Rapids with up to 60 cores per socket.
- Broadcom lpfc driver update: With vSphere 8.0 Update 2, the lpfc driver can generate and provide information from Fabric Performance Impact Notification (FPIN) reports.
- Mellanox (nmlx5) driver update: With vSphere 8.0 Update 2, the nmlx5 driver supports the NVIDIA ConnectX-7 SmartNIC and improves performance, including support for up to 8 Mellanox uplinks per ESXi host, offload support for up to 200G speeds, hardware offloads for inner IPv4/IPv6 checksum offload (CSO) and TCP Segmentation Offload (TSO) with outer IPv6 for both GENEVE and VXLAN, and enabling NetQ Receive-Side Scaling (RSS).
- Marvell (qedentv) driver update: With vSphere 8.0 Update 2, the qedentv driver supports NetQ Receive-Side Scaling (RSS) and Hardware Large Receive Offload (HW LRO) to enhance performance, scalability, and efficiency.
- Other driver updates:
  - Broadcom bcm_mpi3 – bug fixes
  - Broadcom bnxtnet – accumulated bnxtnet driver updates from the async driver, including the queueGetStats callback of vmk_UplinkQueueOps and enhanced debugging
  - Broadcom lsi_mr3 – routine update
  - Intel icen – enables the RSS feature in native mode for the icen NIC device driver
  - Microchip smartpqi – bug fixes and addition of OEM-branded PCI IDs
  - Pensando ionic_cloud – support for cloud vendors
- IPv6 driver enhancements: With vSphere 8.0 Update 2, ESXi drivers add offload capabilities for performance improvements with IPv6 when used as an overlay.
- Uniform Passthrough (UPT) mode support in the nvmxnet3 driver: vSphere 8.0 Update 2 adds support for UPT to allow faster vSphere vMotion operations in nested ESXi environments.
- Support 8 ports of 100G on a single host with both Broadcom and Mellanox NICs: vSphere 8.0 Update 2 increases support for 100GB NIC ports from 4 to 8 in ESXi for Broadcom and Mellanox.
- CIM Services Tickets for REST Authentication: In addition to the JWT-based authentication, vSphere 8.0 Update 2 adds the option to authenticate with an ESXi host by using CIM services tickets acquired with the acquireCimServicesTicket() API for SREST plug-ins (see the short sketch at the end of this list).
- glibc library update: The glibc library is updated to version 2.28 to align with NIAP requirements.
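As referenced in the CIM Services Tickets note above, here is a minimal pyVmomi sketch of acquiring such a ticket. The host name, credentials and the unverified SSL context are placeholders, and how an SREST plug-in consumes the resulting session ID is beyond the scope of this note.

```python
# Minimal sketch (pyVmomi), assuming a direct connection to a standalone ESXi host;
# host name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host="esxi01.example.com", user="root", pwd="***", sslContext=ctx)
try:
    # On a standalone host the inventory contains a single HostSystem.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    ticket = host.AcquireCimServicesTicket()    # returns vim.HostServiceTicket
    print(ticket.sessionId)                     # can be presented instead of a JWT, per the note above
finally:
    Disconnect(si)
```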
Guest platform for workloads:
- Virtual hardware version 21: vSphere 8.0 Update 2 introduces virtual hardware version 21 to enhance latest guest operating system support and increase maximums for vGPU and vNVMe as follows (a short upgrade sketch follows this item):
  - 16 vGPU devices per VM (see Configure Virtual Graphics on vSphere)
  - 256 vNVMe disks per VM (64 x 4 vNVMe adapters)
  - NVMe 1.3 support for Windows 11 and Windows Server 2022
  - NVMe support for Windows Server Failover Clustering (WSFC)
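A minimal pyVmomi sketch of moving a VM to the new hardware version; obtaining the vim.VirtualMachine object and the powered-off precondition are assumptions of this sketch, not something the release note prescribes.

```python
# Minimal sketch (pyVmomi): raising a powered-off VM to virtual hardware version 21
# so the vGPU/vNVMe maximums listed above become available.
def upgrade_to_hw21(vm):
    # UpgradeVM_Task() is the standard vSphere API call for upgrading the
    # virtual hardware; "vmx-21" identifies hardware version 21.
    return vm.UpgradeVM_Task(version="vmx-21")
```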
- Hot-extend a shared vSphere Virtual Volumes Disk: vSphere 8.0 Update 2 supports hot extension of shared vSphere Virtual Volumes disks, which allows you to increase the size of a shared disk without deactivating the cluster and with effectively no downtime, and is helpful for VM clustering solutions such as Windows Server Failover Cluster (WSFC). For more information, see VMware vSphere Virtual Volumes Support for WSFC and You Can Hot Extend a Shared vVol Disk. A minimal resize sketch follows this item.
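The sketch below shows a plain disk extend through ReconfigVM_Task(); the VM lookup, target size and picking the first disk are assumptions, and the release note only states that the extend is now allowed while the vVol disk is shared, it does not prescribe this exact workflow.

```python
# Minimal sketch (pyVmomi): growing an existing virtual disk with ReconfigVM_Task().
from pyVmomi import vim

def hot_extend_first_disk(vm, new_size_gb):
    # Pick the first virtual disk on the VM (assumption for illustration).
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    disk.capacityInKB = new_size_gb * 1024 * 1024      # new, larger capacity
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
```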
- Use vNVMe controller for WSFC: With vSphere 8.0 Update 2, you can use an NVMe controller in addition to the existing Paravirtual SCSI controller for WSFC with Clustered VMDK for Windows Server 2022 (OS Build 20348.1547) and later. To use the NVMe controller, the virtual machine hardware version must be 21 or later. For more information, see Setup for Windows Server Failover Clustering. A minimal controller-add sketch follows this item.
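A minimal pyVmomi sketch of adding a virtual NVMe controller is shown below; the device key and bus number are placeholders, and the WSFC-specific sharing and clustered-VMDK settings described in Setup for Windows Server Failover Clustering are intentionally left out.

```python
# Minimal sketch (pyVmomi): adding a virtual NVMe controller with ReconfigVM_Task().
from pyVmomi import vim

def add_nvme_controller(vm):
    # Negative key marks a newly added device; busNumber 0 is an assumption.
    nvme = vim.vm.device.VirtualNVMEController(key=-101, busNumber=0)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nvme)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
```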
- USB 3.2 Support in virtual eXtensible Host Controller Interface (xHCI): With vSphere 8.0 Update 2, the virtual xHCI controller is 20 Gbps compatible.
- Read-only mode for attached virtual disks: With vSphere 8.0 Update 2, you can attach a virtual disk as read-only to a virtual machine to avoid temporary redo logs and improve performance for use cases such as VMware App Volumes.
- Support VM clone when a First Class Disk (FCD) is attached: With vSphere 8.0 Update 2, you can use the cloneVM() API to clone a VM with an FCD attached (a minimal clone sketch follows this item).
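The sketch below uses CloneVM_Task(), the task form of the cloneVM() API named above; the source VM lookup, the clone name and reusing the source's folder and resource pool are assumptions for illustration.

```python
# Minimal sketch (pyVmomi): cloning a VM that has a First Class Disk attached.
from pyVmomi import vim

def clone_with_fcd(source_vm, clone_name):
    # Reuse the source VM's resource pool and parent folder (assumptions).
    relocate = vim.vm.RelocateSpec(pool=source_vm.resourcePool)
    spec = vim.vm.CloneSpec(location=relocate, powerOn=False, template=False)
    return source_vm.CloneVM_Task(folder=source_vm.parent,
                                  name=clone_name, spec=spec)
```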
GPU
- GPU Driver VM for any passthrough GPU card: With vSphere 8.0 Update 2, a GPU Driver VM facilitates support of new GPU vendors for the virtual SVGA device (vSGA).
Storage
- Support for multiple TCP connections on a single NFS v3 volume: With vSphere 8.0 Update 2, the nConnect capability, which allows multiple TCP connections for a single NFS v3 volume configured by using ESXCLI, becomes fully supported for on-premises environments. For more information, see VMware knowledge base articles 91497 and 91479.
- ESXCLI support for SCSI UNMAP operations for vSphere Virtual Volumes: Starting with vSphere 8.0 Update 2, you can use the ESXCLI command line for SCSI UNMAP operations on vSphere Virtual Volumes.
Resolved Issues:
Installation, Upgrade and Migration Issues
- VMware NSX installation or upgrade in a vSphere environment with DPUs might fail with a connectivity error
  An intermittent timing issue on the ESXi host side might cause NSX installation or upgrade in a vSphere environment with DPUs to fail. In the nsxapi.log file you see logs such as: Failed to get SFHC response. MessageType MT_SOFTWARE_STATUS.
Miscellaneous Issues
- You cannot mount an IPv6-based NFS 3 datastore with VMkernel port binding by using ESXCLI commands
  When you try to mount an NFS 3 datastore with an IPv6 server address and VMkernel port binding by using an ESXCLI command, the task fails with an error such as:
  [:~] esxcli storage nfs add -I fc00:xxx:xxx:xx::xxx:vmk1 -s share1 -v volume1
  Validation of vmknic failed Instance(defaultTcpipStack, xxx:xxx:xx::xxx:vmk1) Input(): Not found:
  The issue is specific to NFS 3 datastores with an IPv6 server address and VMkernel port binding.
- If a PCI passthrough is active on a DPU during the shutdown or restart of an ESXi host, the host fails with a purple diagnostic screen
  If an active virtual machine has a PCI passthrough to a DPU at the time of shutdown or reboot of an ESXi host, the host fails with a purple diagnostic screen. The issue is specific to systems with DPUs and occurs only for VMs that use PCI passthrough to the DPU.
- If you configure a VM at a HW version earlier than 20 with a Vendor Device Group, such VMs might not work as expected
  Vendor Device Groups, which enable binding of high-speed networking devices and the GPU, are supported only on VMs with HW version 20 and later, but you are not prevented from configuring a VM at a HW version earlier than 20 with a Vendor Device Group. Such VMs might not work as expected: for example, they might fail to power on.
Networking Issues
- ESXi reboot takes long due to NFS server mount timeout
  When you have multiple mounts on an NFS server that is not accessible, ESXi retries connection to each mount for 30 seconds, which might add up to minutes of ESXi reboot delay, depending on the number of mounts.
- Auto discovery of NVMe Discovery Service might fail on ESXi hosts with NVMe/TCP configurations
  vSphere 8.0 adds advanced NVMe-oF Discovery Service support in ESXi that enables the dynamic discovery of standards-compliant NVMe Discovery Service. ESXi uses the mDNS/DNS-SD service to obtain information such as IP address and port number of active NVMe-oF discovery services on the network. However, in ESXi servers with NVMe/TCP enabled, the auto discovery on networks configured to use vSphere Distributed Switch might fail. The issue does not affect NVMe/TCP configurations that use standard switches.
- If you do not reboot an ESXi host after you enable or disable SR-IOV with the icen driver, when you configure a transport node in ENS Interrupt mode on that host, some virtual machines might not get DHCP addresses
  If you enable or disable SR-IOV with the icen driver on an ESXi host and configure a transport node in ENS Interrupt mode, some Rx (receive) queues might not work if you do not reboot the host. As a result, some virtual machines might not get DHCP addresses.
vSphere Lifecycle Manager Issues
- If you use an ESXi host deployed from a host profile with stateful install enabled as an image to deploy other ESXi hosts in a cluster, the operation fails
  If you extract an image of an ESXi host deployed from a host profile with stateful install enabled to deploy other ESXi hosts in a vSphere Lifecycle Manager cluster, the operation fails. In the vSphere Client, you see an error such as: A general system error occurred: Failed to extract image from the host: no stored copy available for inactive VIB VMW_bootbank_xxx. Extraction of image from host xxx.eng.vmware.com failed.