VMware, the virtualization software vendor, has released the next update for VMware vCenter Server, designated 7.0 Update 1. You are probably wondering what is new in this update: vSphere availability has been improved, and there is a new initiative in which the vendor lets you submit your own ideas through the vSphere Ideas portal; the most popular ones will be added to the software. For more news, read on in the rest of the article 🙂
What's new:
- System virtual machines for vSphere Cluster Services
- Expanded list of supported Red Hat Enterprise Linux and Ubuntu versions for the VMware vSphere Update Manager Download Service
- SMTP authentication
- The Ideas portal
- A Silence Alert button in VMware Skyline Health
- Improved vSphere Lifecycle Manager hardware compatibility pre-checks for vSAN environments
Resolved Issues:
vSphere Lifecycle Manager Issues
- While remediating a vSphere HA enabled cluster in vSphere Lifecycle Manager, adding hosts causes a vSphere HA error state
Adding one or multiple ESXi hosts during the remediation of a vSphere HA enabled cluster results in the following error message:
Applying HA VIBs on the cluster encountered a failure.
This issue is resolved in this release.
- Importing an image with no vendor addon, components, or firmware and drivers addon to a cluster whose image contains such elements does not remove those elements from the existing image
Only the ESXi base image is replaced with the one from the imported image.
This issue is resolved in this release.
- ESXi 7.0 hosts cannot be added to a cluster that you manage with a single image by using vSphere Auto Deploy
Attempting to add ESXi hosts to a cluster that you manage with a single image by using the Add to Inventory workflow in vSphere Auto Deploy fails. The failure occurs because no patterns are matched in the existing Auto Deploy ruleset. The task fails silently, and the hosts remain in the Discovered Hosts tab.
This issue is resolved in this release.
vCenter Server and vSphere Client Issues
- Linked Software-Defined Data Center (SDDC) vCenter Server instances appear in the on-premises vSphere Client if a vCenter Cloud Gateway is linked to the SDDC
When a vCenter Cloud Gateway is deployed in the same environment as an on-premises vCenter Server and linked to an SDDC, the SDDC vCenter Server appears in the on-premises vSphere Client. This is unexpected behavior, and the linked SDDC vCenter Server should be ignored. All operations involving the linked SDDC vCenter Server should be performed on the vSphere Client running within the vCenter Cloud Gateway.
This issue is resolved in this release.
Security Issues
- Update to the SQLite database
The SQLite database is updated to version 3.32.2.
- Update to the Apache Tomcat server
The Apache Tomcat server is updated to version 8.5.55 / 9.0.35.
- Update to cURL
cURL in the vCenter Server is updated to version 7.70.0.
- Update to VMware PostgreSQL
VMware PostgreSQL is updated to version 11.8.
- Update to OpenJDK 1.8.0.252
Open-source JDK is updated to version 1.8.0.252.
- Update of the Jackson package
The Jackson package is updated to version 2.10.3.
- Upgrade of Eclipse Jetty
Eclipse Jetty is upgraded to version 9.4.28.
- Update to the Spring Framework
The Spring Framework is updated to version 4.3.27 / 5.2.5.
Storage Issues
- Attempts to attach multiple CNS volumes to the same pod might occasionally fail with an error
When you attach multiple volumes to the same pod simultaneously, the attach operation might occasionally choose the same controller slot. As a result, only one of the operations succeeds, while the other volume mounts fail.
This issue is resolved in this release.
Known Issues:
Virtual Machine Management Issues
- You cannot add or modify an existing network adapter on a virtual machine
If you try to add or modify an existing network adapter on a virtual machine, the Reconfigure Virtual Machine task might fail in the vSphere Client with an error such as:
Cannot complete operation due to concurrent modification by another operation
In the /var/log/hostd.log file of the ESXi host where the virtual machine runs, you see logs such as:
2020-07-28T07:47:31.621Z verbose hostd[2102259] [Originator@6876 sub=Vigor.Vmsvc.vm:/vmfs/volumes/vsan:526bc94351cf8f42-41153841cab2f9d9/bad71f5f-d85e-a276-4cf6-246e965d7154/interop_l2vpn_vmotion_VM_1.vmx] NIC: connection control message: Failed to connect virtual device 'ethernet0'.
In the vpxa.log file, you see entries similar to:
2020-07-28T07:47:31.941Z info vpxa[2101759] [Originator@6876 sub=Default opID=opId-59f15-19829-91-01-ed] [VpxLRO] -- ERROR task-138 -- vm-13 -- vim.VirtualMachine.reconfigure: vim.fault.GenericVmConfigFault:
Workaround: For each ESXi host in your cluster, do the following:
- Connect to the ESXi host by using SSH and run the command:
esxcli system module parameters set -a -p dvfiltersMaxFilters=8192 -m dvfilter
- Put the ESXi host in Maintenance Mode.
- Reboot the ESXi host.
For more information, see VMware knowledge base article 80399.
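After the reboot, you can optionally verify that the new value is active; a minimal check, assuming the ESXi shell (the grep filter only narrows the output):
esxcli system module parameters list -m dvfilter | grep dvfiltersMaxFilters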
- ESXi 6.5 hosts with AMD Opteron Generation 3 (Greyhound) processors cannot join Enhanced vMotion Compatibility (EVC) AMD REV E or AMD REV F clusters on a vCenter Server 7.0 Update 1 system
In vCenter Server 7.0 Update 1, vSphere cluster services, such as vSphere DRS and vSphere HA, run on ESX agent virtual machines to make the services functionally independent of vCenter Server. However, the CPU baseline for AMD processors of the ESX agent virtual machines includes the POPCNT and SSE4A instructions, which prevents ESXi 6.5 hosts with AMD Opteron Generation 3 (Greyhound) processors from enabling EVC mode AMD REV E and AMD REV F on a vCenter Server 7.0 Update 1 system.
Workaround: None.
Installation, Upgrade, and Migration Issues
- Patching to vCenter Server 7.0 Update 1 from earlier versions of vCenter Server 7.x is blocked when vCenter Server High Availability is enabled
Patching to vCenter Server 7.0 Update 1 from earlier versions of vCenter Server 7.x is blocked while vCenter Server High Availability is active.
Workaround: To patch your system to vCenter Server 7.0 Update 1 from an earlier version of vCenter Server 7.x, you must remove vCenter Server High Availability and delete the passive and witness nodes. After the upgrade, you must re-create your vCenter Server High Availability clusters.
- Migration of a 6.7.x vCenter Server system to vCenter Server 7.x fails with a UnicodeEncodeError
If you select the option to import all data for configuration, inventory, tasks, events, and performance metrics, the migration of a 6.7.x vCenter Server system to vCenter Server 7.x might fail for any vCenter Server system that uses a non-English locale. At step 1 of stage 2 of the migration, in the vSphere Client, you see an error such as:
Error while exporting events and tasks data: …ERROR UnicodeEncodeError: Traceback (most recent call last):
Workaround: You can complete the migration operation by doing either of the following:
- Select the default option Configuration and Inventory at the end of stage 1 of the migration. This option does not include tasks and events data.
- Clean the data in the events tables and run the migration again.
- If a Windows vCenter Server system has a database password containing non-ASCII characters, pre-checks of the VMware Migration Assistant fail
If you try to migrate a 6.x vCenter Server system to vCenter Server 7.x by using the VMware Migration Assistant, and your system runs on a Windows OS and uses an external database with a password containing non-ASCII characters, for example Admin!23迁移, the operation fails. In the Migration Assistant console, you see the following error:
Error: Component com.vmware.vcdb failed with internal error
Workaround: None
- During an update from vCenter Server 7.x to vCenter Server 7.0 Update 1, you get prompts to provide the vCenter Single Sign-On password
During an update from vCenter Server 7.x to vCenter Server 7.0 Update 1, you are prompted to provide the vCenter Single Sign-On administrator password.
Workaround: If you run the update by using the vCenter Server Management Interface, you must provide the vCenter Single Sign-On administrator password.
If you run the update by using software-packages or CLI in an interactive manner, you must interactively provide the vCenter Single Sign-On administrator password.
If you run the update by using software-packages or CLI in a non-interactive manner, you must provide the vCenter Single Sign-On administrator password in an answer file in the following format:
{ "vmdir.password": "SSO Password of Administrator@<SSO-DOMAIN> user" }
- You might not be able to apply or remove NSX while you add ESXi hosts by using a vSphere Lifecycle Manager image to a cluster with enabled VMware vSphere High Availability
If you start an operation to apply or remove NSX while adding multiple ESXi hosts by using a vSphere Lifecycle Manager image to a vSphere HA-enabled cluster, the NSX-related operations might fail with an error in the vSphere Client such as:
vSphere HA agent on some of the hosts on cluster <cluster_name> is neither vSphere HA master agent nor connected to vSphere HA master agent. Verify that the HA configuration is correct.
The issue occurs because vSphere Lifecycle Manager configures vSphere HA for the ESXi hosts being added to the cluster one at a time. If you run an operation to apply or remove NSX while vSphere HA configure operations are still in progress, the NSX operations might queue up between the vSphere HA configure operations for two different ESXi hosts. In such a case, the NSX operation fails with a cluster health check error, because the state of the cluster at that point does not match the expected state in which all ESXi hosts have vSphere HA configured and running. The more ESXi hosts you add to a cluster at the same time, the more likely the issue is to occur.
Workaround: Disable and re-enable vSphere HA on the cluster. Then proceed with the operations to apply or remove NSX.
- After an upgrade of a vCenter Server 7.0 system, you cannot see the IP addresses of pods in the vSphere Pod Summary tab of the vSphere Client
If you upgrade your vCenter Server 7.0 system to a later version, you can no longer see the IP addresses of pods in the vSphere Pod Summary tab of the vSphere Client.
Workaround: Use the Kubernetes CLI Tools for vSphere to review details of pods:
- As a prerequisite, copy the pod and namespace names.
- In the vSphere Client, navigate to Workload Management > Clusters.
- Copy the IP displayed in the Control Plane Node IP Address tab.
- Navigate to https://<control_plane_node_IP_address> and download the Kubernetes CLI Tools, kubectl and kubectl-vsphere. Alternatively, follow the steps in Download and Install the Kubernetes CLI Tools for vSphere.
- Use the CLI plug-in for vSphere to review the pod details:
- Log in to the Supervisor cluster by using the command:
kubectl vsphere login --server=https://<server_address> --vsphere-username <your user account name> --insecure-skip-tls-verify
- By using the names copied in step 1, run the commands for retrieving the pod details:
kubectl config use-context <namespace_name>
kubectl describe pod <pod_name> -n <namespace_name>
As a result, you can see the IP address in an output similar to:
$ kubectl describe pod helloworld -n my-podvm-ns ...
Status: Running
IP: 10.0.0.10
IPs:
IP: 10.0.0.10 ...
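If you only need the pod IP, a shorter alternative is the standard kubectl wide output (a hedged sketch, using the same placeholder names as the steps above):
kubectl get pod <pod_name> -n <namespace_name> -o wide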
- Deployment of a vCenter Server Appliance by using port 5480 at stage 2 fails with an Unable to save IP settings error
If you use https://appliance-IP-address-or-FQDN:5480 in a Web browser to open the vCenter Server Appliance Management Interface for stage 2 of a newly deployed vCenter Server Appliance, and you configure a static IP or try to change the IP configuration, you see an error such as:
Unable to save IP settings
Workaround: None.
Backup Issues
- If you use the NFS and SMB protocols for file-based backup of vCenter Server, the backup fails after an update from vCenter Server 7.x to vCenter Server 7.0 Update 1
If you use the Network File System (NFS) and Server Message Block (SMB) protocols for file-based backup of vCenter Server, the backup fails after an update from an earlier version of vCenter Server 7.x to vCenter Server 7.0 Update 1. In the applmgmt.log file, you see an error message such as:
Failed to mount the remote storage
The issue occurs because of Linux kernel updates that run during the patch process. The issue does not occur on fresh installations of vCenter Server 7.0 Update 1.
Workaround: Reboot the vCenter Server appliance after the update is complete.
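A minimal sketch of that reboot, assuming the appliance shell (appliancesh) is available; the reason string is only illustrative:
shutdown reboot -r "remount backup targets after patching"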
vSphere Lifecycle Manager Issues
- If you use a Java client to review remediation tasks, you cannot extract the results from the remediation operations
If you use a Java client to review remediation tasks, extracting the results might fail with a ConstraintValidationException error. The issue occurs when an ESXi host fails to enter maintenance mode during the remediation and gets a SKIPPED status, but at the same time wrongly gets an In Progress flag for the consecutive remediation operations. This causes the ConstraintValidationException error on the Java clients, and you cannot extract the result of the remediation operation.
Workaround: Fix the underlying issues that prevent ESXi hosts from entering Maintenance Mode and retry the remediation operation.
- The general vSphere Lifecycle Manager depot and local depots in Remote Office and Branch Office (ROBO) deployments might not be in sync
ROBO clusters that have limited or no access to the Internet, or limited connectivity to vCenter Server, can download an image from a local depot instead of accessing the vSphere Lifecycle Manager depot in vCenter Server. However, vSphere Lifecycle Manager generates software recommendations in the form of pre-validated images only at the central level, and recommended image content might not be available at a depot override.
Workaround: If you decide to use a recommended image, make sure the content between the depot overrides and the central depot is in sync.
- Cluster remediation by using vSphere Lifecycle Manager might fail on ESXi hosts with enabled lockdown mode
If a cluster has ESXi hosts with enabled lockdown mode, remediation operations by using vSphere Lifecycle Manager might skip such hosts. In the log files, you see messages such as:
Host scan task failed
com.vmware.vcIntegrity.lifecycle.EsxImage.UnknownError An unknown error occurred while performing the operation.
Workaround: Add the root user to the exception list for lockdown mode and retry the cluster remediation.
Networking Issues
- If you try to disable vSphere with Tanzu on a vSphere cluster, the operation stops with an error
If some virtual machines outside of a Supervisor Cluster reside on any of the NSX segment port groups on the cluster, the cleanup script cannot delete such ports and cannot disable vSphere with Tanzu on the cluster. In the vSphere Client, you see the error:
Cleanup requests to NSX Manager failed
and the operation stops in the Removing status. In the /var/log/vmware/wcp/wcpsvc.log file, you see an error message such as:
Segment path=[...] has x VMs or VIFs attached. Disconnect all VMs and VIFs before deleting a segment.
Workaround: Delete the virtual machines indicated in the /var/log/vmware/wcp/wcpsvc.log file from the segment, then wait for the operation to resume.
- After upgrading to NSX 6.4.7, when a static IPv6 address is assigned to workload VMs on an IPv6 network, the VMs are unable to ping the IPv6 gateway interface of the edge
This issue occurs after upgrading the vSphere Distributed Switches from 6.x to 7.0.
Workaround 1: Select the VDS where all the hosts are connected, go to Edit settings, and under the Multicast option switch to basic.
Workaround 2:
Add the following rules on the edge firewall:
- A ping allow rule.
- A Multicast Listener Discovery (MLD) allow rule, that is, icmp6, type 130 (MLD v1) and type 143 (MLD v2).
vSAN Issues
- Virtual machines lose connectivity due to a network outage in the preferred site
In a vSAN stretched cluster setup, a network outage in the preferred site might make all virtual machines in that site inaccessible. The virtual machines do not fail over to the secondary site and remain inaccessible until the network outage is resolved.
Workaround: None.
vSphere Cluster Services Issues
- If all vSphere Cluster Service agent virtual machines in a cluster are down, vSphere DRS does not function in the cluster
If vSphere Cluster Service agent virtual machines fail to deploy or power on in a cluster, services such as vSphere DRS might be impacted.
Workaround: For more information on the issue and workarounds, see VMware knowledge base article 79892.
- System virtual machines that support vSphere Cluster Services might impact cluster and datastore maintenance workflows
In vCenter Server 7.0 Update 1, vSphere Cluster Services adds a set of system virtual machines to every vSphere cluster to ensure the healthy operation of vSphere DRS. The system virtual machines deploy automatically with an implicit datastore selection logic. Depending on your cluster configuration, the system virtual machines might impact some of the cluster and datastore maintenance workflows.
For full details, see the vendor's release notes: VMware vCenter Server 7.0 Update 1
Best regards,
The B&B Team
Bezpieczeństwo w biznesie