VMware, the virtualization software vendor, has released a new version of its vSphere (ESXi) product: 7.0.3, designation d (ESXi 7.0 Update 3d). This is already the fourth release of the product in the 7.0.3 line. The latest version brings significant security improvements, patching numerous vulnerabilities present in earlier releases. The update also fixes an issue where the VMkernel could shut down virtual machines due to a vCPU timer problem, and resolves a vSphere Replication malfunction in which enabling replication on one virtual machine could make many other VMs unresponsive. Read on for more details.
Important information:
- Starting with vSphere 7.0, VMware uses components for packaging VIBs along with bulletins. The ESXi and esx-update bulletins are dependent on each other. Always include both in a single ESXi host patch baseline, or include the rollup bulletin in the baseline, to avoid failure during host patching.
- When patching ESXi hosts by using VMware Update Manager from a version prior to ESXi 7.0 Update 2, it is strongly recommended to use the rollup bulletin in the patch baseline (see the example command after the package list below). If you cannot use the rollup bulletin, be sure to include all of the following packages in the patching baseline. If the following packages are not included in the baseline, the update operation fails:
- VMware-vmkusb_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkata_0.1-1vmw.701.0.0.16850804 or higher
- VMware-vmkfcoe_1.0.0.2-1vmw.701.0.0.16850804 or higher
- VMware-NVMeoF-RDMA_1.0.1.2-1vmw.701.0.0.16850804 or higher
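For hosts patched from the command line rather than through a baseline, the rollup can also be applied with the esxcli software profile command. This is only an illustration, not part of the vendor's notes; the depot file name and image profile below are placeholders that must match the bundle you actually download from VMware:
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U3d-depot.zip -p ESXi-7.0U3d-standard
The -d option points at the offline depot ZIP and -p selects the image profile inside it; put the host into maintenance mode before running the update.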
- ESXi 7.0 Update 3d provides the following security updates (a quick way to confirm the installed build after patching is shown below the list):
- cURL is updated to version 7.79.1.
- The Python third-party library is updated to resolve CVE-2021-29921.
- The GNU C Library (glibc) is updated to resolve the following CVEs: CVE-2015-5180, CVE-2015-8777, CVE-2015-8982, CVE-2016-10739, CVE-2016-3706, CVE-2017-1000366, CVE-2018-1000001, CVE-2018-19591, CVE-2019-19126, and CVE-2020-10029.
- OpenSSH is updated to version 8.8p1.
- OpenSSL is updated to version 1.0.2zb.
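After remediation, you can confirm that a host is actually running the patched build directly from the ESXi shell. Both commands below are standard ESXi commands and print the product version and build number, which should correspond to 7.0 Update 3d:
vmware -vl
esxcli system version get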
What's new:
- ESXi 7.0 Update 3d supports vSphere Quick Boot on the following servers (a compatibility check example is shown after the list):
- Dell Inc. C6420 vSAN Ready Node
- Dell Inc. MX740C vSAN Ready Node
- Dell Inc. MX750C vSAN Ready Node
- Dell Inc. PowerEdge R750xa
- Dell Inc. PowerEdge R750xs
- Dell Inc. PowerEdge T550
- Dell Inc. R650 vSAN Ready Node
- Dell Inc. R6515 vSAN Ready Node
- Dell Inc. R740 vSAN Ready Node
- Dell Inc. R750 vSAN Ready Node
- Dell Inc. R7515 vSAN Ready Node
- Dell Inc. R840 vSAN Ready Node
- Dell Inc. VxRail E660
- Dell Inc. VxRail E660F
- Dell Inc. VxRail E660N
- Dell Inc. VxRail E665
- Dell Inc. VxRail E665F
- Dell Inc. VxRail E665N
- Dell Inc. VxRail G560
- Dell Inc. VxRail G560F
- Dell Inc. VxRail P580N
- Dell Inc. VxRail P670F
- Dell Inc. VxRail P670N
- Dell Inc. VxRail P675F
- Dell Inc. VxRail P675N
- Dell Inc. VxRail S670
- Dell Inc. VxRail V670F
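Whether a particular host can actually use Quick Boot can be checked on the host itself. The script below is the standard compatibility check shipped with ESXi 7.x (shown here as an illustration; its exact output can vary by build):
/usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py
If the script reports the host as compatible, Quick Boot can be enabled for remediation in the vSphere Lifecycle Manager / Update Manager settings.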
Resolved issues:
- PR 2855241: Adding ESXi hosts to an Active Directory domain might take long. Some LDAP queries that have no specified timeouts might cause a significant delay in domain join operations when adding an ESXi host to an Active Directory domain. This issue is resolved in this release. The fix adds a 15-second standard timeout with additional logging around the LDAP calls during the domain join workflow.
- PR 2834582: Concurrent power on of a large number of virtual machines might take long or fail. In certain environments, concurrent power on of a large number of VMs hosted on the same VMFS6 datastore might take long or fail. The time needed to create swap files for all VMs causes delays and might ultimately cause the power-on operations to fail. This issue is resolved in this release. The fix enhances the VMFS6 resource allocation algorithm to prevent the issue.
- PR 2851811: Virtual machines might stop responding during power on or snapshot consolidation operations. A virtual machine might stop responding during a power on or snapshot consolidation operation, and you must reboot the ESXi host to restart the VM. The issue is rare and occurs while opening the VMDK file. This issue is resolved in this release. However, the fix resolves an identified root cause and might not resolve all aspects of the issue. The fix adds logs with the tag AFF_OPEN_PATH to facilitate identifying and resolving an alternative root cause if you face the issue.
- PR 2865369: An ESXi host might fail with a purple diagnostic screen due to a very rare race condition in software iSCSI adapters. In very rare cases, a race condition in software iSCSI adapters might cause the ESXi host to fail with a purple diagnostic screen. This issue is resolved in this release.
- PR 2849843: Virtual machine storage panel statistics show VMname as null on ESXi servers running for more than 150 days. If an ESXi server runs for more than 150 days without a restart, the resource pool ID (GID) number might overflow uint32. As a result, in the VM storage panel statistics pane you might see GID as a negative number and VMname as null. This issue is resolved in this release. The fix changes the GID variable to uint64.
- PR 2850065: The VMkernel might shut down virtual machines due to a vCPU timer issue. On rare occasions, the VMkernel might consider a virtual machine unresponsive, because it fails to properly send the PCPU heartbeat, and shut the VM down. In the vmkernel.log file, you see messages such as:
2021-05-28T21:39:59.895Z cpu68:1001449770)ALERT: Heartbeat: HandleLockup:827: PCPU 8 didn't have a heartbeat for 5 seconds, timeout is 14, 1 IPIs sent; *may* be locked up.
2021-05-28T21:39:59.895Z cpu8:1001449713)WARNING: World: vm 1001449713: PanicWork:8430: vmm3:VM_NAME:vcpu-3:Received VMkernel NMI IPI, possible CPU lockup while executing HV VT VM
The issue is due to a rare race condition in vCPU timers. Because the race is per-vCPU, larger VMs are more exposed to the issue. This issue is resolved in this release.
- PR 2851400: When vSphere Replication is enabled on a virtual machine, many other VMs might become unresponsive. When vSphere Replication is enabled on a virtual machine, you might see higher datastore and in-guest latencies that in certain cases might lead to ESXi hosts becoming unresponsive to vCenter Server. The increased latency comes from vSphere Replication computing MD5 checksums on the I/O completion path, which delays all other I/Os. This issue is resolved in this release. The fix offloads the vSphere Replication MD5 calculation from the I/O completion path to a work pool and reduces the amount of outstanding I/O that vSphere Replication issues.
- PR 2859882: Network interface entries republished with an old format. After a reboot, the lead host of a vSAN cluster might have a new format for network interface entries. The new format might not propagate to some entries, for example, interface entries in the local update queue of the lead host. This issue is resolved in this release.
- PR 2861109: ESXi hosts might fail with a purple diagnostic screen during shutdown due to stale metadata. In rare cases, when you delete a large component in an ESXi host, followed by a reboot, the reboot might start before all metadata of the component gets deleted. The stale metadata might cause the ESXi host to fail with a purple diagnostic screen. This issue is resolved in this release. The fix makes sure no pending metadata remains before a reboot of ESXi hosts.
- PR 2859643: You see sfcb core dumps during planned removal of NVMe devices. To optimize the processing of queries related to PCI devices, SFCB maintains a list of the PCI devices in a cache. However, when you remove an NVMe device, even with a planned workflow, the cache might not get refreshed. As a result, you see sfcb core dumps, since the lookup for the removed device fails. This issue is resolved in this release. The fix makes sure that SFCB refreshes the cache on any change in the PCI devices list.
- PR 2875575: After upgrading to ESXi 7.0 Update 2d and later, you see an NTP time sync error. In some environments, after upgrading to ESXi 7.0 Update 2d and later, in the vSphere Client you might see the error Host has lost time synchronization. However, the alarm might not indicate an actual issue. This issue is resolved in this release. The fix replaces the error message with a log function for backtracing and prevents false alarms.
- PR 2851531: ESXi hosts in environments using uplink and teaming policies might lose connectivity after remediation by applying a host profile. When you remediate ESXi hosts by using a host profile, network settings might fail to apply due to a logic fault in the check of the number of uplink ports configured for the default teaming policy. If the uplink number check returns 0 while applying a host profile, the task fails. As a result, ESXi hosts lose connectivity after reboot. This issue is resolved in this release. The fix refines the uplink number check and makes sure it returns an error only in specific conditions.
- PR 2869790: You do not see VMkernel network adapters after ESXi hosts reboot. In vSphere systems where VMkernel network adapters are connected to multiple TCP/IP stacks, after ESXi hosts reboot, some of the adapters might not be restored. In the vSphere Client, when you navigate to Host > Configure > VMkernel Adapters, you see a message such as No items found. If you run the ESXCLI commands localcli network ip interface list or esxcfg-vmknic -l, you see the error Unable to get node: Not Found. The hostd.log reports show the same error. This issue is resolved in this release.
- PR 2846290: ESXi hosts with virtual machines with Latency Sensitivity enabled might randomly become unresponsive due to CPU starvation. When you enable Latency Sensitivity on virtual machines, some threads of the Likewise Service Manager (lwsmd), which sets CPU affinity explicitly, might compete for CPU resources on such virtual machines. As a result, you might see the ESXi host and the hostd service become unresponsive. This issue is resolved in this release. The fix makes sure lwsmd does not set CPU affinity explicitly.
- PR 2871577: If you disable and then re-enable migration operations by using vSphere vMotion, consecutive migrations might cause ESXi hosts to fail with a purple diagnostic screen. In specific cases, if you use vSphere vMotion on a vSphere system after disabling and re-enabling migration operations, ESXi hosts might fail with a purple diagnostic screen. For example, if you run the ESXCLI command esxcli system settings advanced set --option /Migrate/Enabled --int-value=0 to disable the feature and then run esxcli system settings advanced set --option /Migrate/Enabled --default to enable it, any migration after that might cause the issue. This issue is resolved in this release.
- PR 2914095: High CPU utilization after a non-disruptive upgrade (NDU) of a storage array firmware. ESXi hosts might experience more than 90% CPU utilization after an NDU upgrade of a storage array firmware, for example a PowerStore VASA provider. The high CPU usage eventually settles down but, in some cases, might take long, even more than 24 hours. This issue is resolved in this release.
- PR 2871515: ESXi hosts lose IPv6 DNS after a VMkernel port migration. After a VMkernel port migration from a standard virtual switch to a vSphere Distributed Virtual Switch (VDS) or from one VDS to another, ESXi hosts might lose their IPv6 DNS. This issue is resolved in this release. The fix makes sure that during a VMkernel port migration, IPv6 nameservers are added or removed one at a time to avoid removing them all in certain environments.
- PR 2852173: ESXi hosts might fail with a purple diagnostic screen due to insufficient socket buffer space. ESXi management daemons that generate high volumes of log messages might impact operational communication between user-level components due to insufficient socket buffer space. As a result, ESXi hosts might fail with a purple diagnostic screen with a message such as nicmgmtd: Cannot allocate a new data segment, out of memory. This issue is resolved in this release. The fix allocates low-level socket space shared between components separately from application buffer space.
- PR 2872509: You cannot see the status of objects during a Resyncing objects task. In the vSphere Client, when you select Resyncing objects under a vSAN cluster > Monitor > vSAN, you do not see the status of objects that are being resynchronized. Instead, you see the error Failed to extract requested data. Check vSphere Client logs for details. This issue is resolved in this release.
- PR 2878701: In the VMware Host Client, you see an error that no sensor data is available. In the VMware Host Client, you see the error No sensor data available due to an issue with the DateTime formatting. In the backtrace, you see logs such as: hostd[1051205] [Originator@6876 sub=Cimsvc] Refresh hardware status failed N7Vmacore23DateTimeFormatExceptionE(Error formatting DateTime). This issue is resolved in this release.
- PR 2847291: Management accounts of VMware Cloud Director on Dell EMC VxRail might be deleted during host profile remediation. When you create a host profile, service accounts for VMware Cloud Director on Dell EMC VxRail are automatically created and might be deleted during a remediation of the host profile. This issue is resolved in this release. The fix makes sure that service accounts for VMware Cloud Director on Dell EMC VxRail do not depend on host profile operations.
- PR 2884344: vSAN host configuration does not match vCenter Server when a native key provider is unhealthy. When the status of a native key provider for vSAN encryption is unhealthy, the remediation workflow might be blocked. vCenter Server cannot synchronize its configuration settings with the vSAN hosts until the block is cleared. This issue is resolved in this release.
- PR 2854558: Cannot enable vSAN encryption by using a native key provider when hosts are behind a proxy. When you place vSAN hosts behind a proxy server, vSAN cannot determine the health of the native key provider. As a result, you cannot enable vSAN encryption by using the native key provider. You might see the following message: Key provider is not available on host. This issue is resolved in this release.
- PR 2859229: You see a compliance check error for hosts in a vSAN HCI Mesh cluster. Hosts in a vSAN cluster with HCI Mesh enabled might experience the following compliance check error: Unable to gather datastore name from Host. This issue is resolved in this release.
- PR 2840405: You cannot change the resource pool size of WBEM providers. Names of WBEM providers can be different from their resource group name. In such cases, commands such as esxcli system wbem set --rp-override fail to change the existing configuration, because the method to change a resource pool size also checks the resource group name. This issue is resolved in this release. The fix removes the check between WBEM provider names and resource group names.
- PR 2897700: If data in transit encryption is enabled on a vSAN cluster, ESXi hosts might fail with a purple diagnostic screen. If data in transit encryption is enabled on a vSAN cluster and other system traffic types, such as vSphere vMotion traffic or vSphere HA traffic, route to a port used by vSAN, ESXi hosts might fail with an error such as PSOD: #PF Exception 14 in world 1000215083:rdtNetworkWo IP 0x42000df1be47 addr 0x80 on a purple diagnostic screen. This issue is resolved in this release.
- PR 2925847: After an ESXi update to 7.0 Update 3 or later, the VPXA service fails to start and ESXi hosts disconnect from vCenter Server. After updating ESXi to 7.0 Update 3 or later, hosts might disconnect from vCenter Server, and when you try to reconnect a host by using the vSphere Client, you see an error such as A general system error occurred: Timed out waiting for vpxa to start. The VPXA service also fails to start when you use the command /etc/init.d/vpxa start. The issue affects environments with RAIDs that contain more than 15 physical devices. The lsuv2-lsiv2-drivers-plugin can manage up to 15 physical disks, and RAIDs with more devices cause an overflow that prevents VPXA from starting. This issue is resolved in this release.
Known issues:
- SSH access fails after you upgrade to ESXi 7.0 Update 3d. After you upgrade to ESXi 7.0 Update 3d, SSH access might fail in certain conditions due to an update of OpenSSH to version 8.8. Workaround: For more information, see VMware knowledge base article 88055.
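As general background rather than a summary of the KB article: OpenSSH 8.8 disables RSA signatures that use SHA-1 (the legacy ssh-rsa algorithm) by default, so older SSH clients, libraries, or keys that rely on that algorithm may be rejected after the upgrade. One illustrative way to rule this out, assuming key-based access for root, is to switch to a newer key type on the client and register it on the host (paths shown are the usual ESXi locations and should be verified for your environment):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_esxi
cat ~/.ssh/id_ed25519_esxi.pub   # append this line to /etc/ssh/keys-root/authorized_keys on the ESXi host
The authoritative cause analysis and supported workaround are in KB 88055.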
Vendor release notes: VMware ESXi 7.0 Update 3d
Best regards,
The B&B Team
Bezpieczeństwo w biznesie