VMware vSphere is the foundation on which most private cloud datacenters run. As VMware defines it, vSphere 8 is the enterprise workload platform that brings the benefits of the cloud to on-premises workloads, supercharges performance through DPUs and GPUs, and accelerates innovation with an enterprise-ready integrated Kubernetes runtime.

In this post, I want to introduce the new and unique features that I found useful and interesting in vSphere 8.0!

First is the vSphere Distributed Services Engine, which I am pretty sure most of you have heard of under the name Project Monterey. Two years ago, “Project Monterey” was announced, and the idea was to leverage the Data Processing Unit, or DPU for short. You may have also heard it called a “SmartNIC”. The idea is to accelerate data processing by utilizing DPUs!

A DPU, like any other PCIe device, sits in the hardware layer and processes data as it moves around the data center! But you may ask: what is the relevance to vSphere 8?

Well, vSphere 8.0 is an infrastructure that is ready to utilize DPUs much more easily!
With DPUs, you basically install a second ESXi instance directly on the DPU, which allows vSphere services to be offloaded to it. In addition, other solutions like NSX and vSAN can use it to accelerate performance, because offloading frees up resources on the compute hypervisor!

Data Processing Unit (VMware 2022)

Having said that, the first service in vSphere 8 to be offloaded to the DPU is network services, through vSphere Distributed Switch (vDS) version 8 and NSX, which provides better performance and, at the same time, better security: even if the compute hypervisor gets compromised, the DPU layer is isolated from it, so the offloaded services remain intact.

You may think, “Ahhh, having two ESXi instances means extra work for the installation and lifecycle of the infrastructure!” But in reality, to install ESXi on the DPU, you only need to tick a checkbox! And regarding lifecycle management, when you use vSphere Lifecycle Manager (vLCM) to upgrade or patch your ESXi host, you will see the ESXi instance installed on the DPU as a child object, so you upgrade your ESXi as usual and the second ESXi instance on the DPU will be upgraded too.

Having vSphere Distributed Switch version 8 makes utilizing DPUs easy and fast. There is a drop-down menu where, by selecting Network Offload Compatibility, you can choose which DPU is currently in use and then complete the rest of the configuration based on your needs.

Network Offload Compatibility – vDS version 8 (VMware 2022)

The other point I want to discuss in vSphere 8 is vSphere Lifecycle Manager.

First, bear in mind that vSphere Update Manager (VUM) is deprecated in vSphere 8, which means vSphere 8 is going to be the last vSphere release that supports baseline lifecycle management.

The other enhancement is that vSphere Lifecycle Manager in vSphere 8 supports parallel upgrades. From my point of view, it’s a great feature because it can tremendously reduce the upgrade time, especially in bigger clusters. As a side note, you can now also use vLCM to manage the lifecycle of a standalone host, but keep in mind that this is only available through the API. Regarding parallel upgrades, at the time of writing this post, there are two key points to remember.

First, to use this feature, administrators need to put the ESXi hosts into maintenance mode manually. So the ESXi hosts that will be upgraded in parallel need to be specified and put into maintenance mode beforehand. Secondly, this feature is not enabled by default; you need to enable it in the global vLCM settings. There you can also specify whether remediation should run in an “automated” fashion or “manually,” which gives you more control over which hosts are upgraded in parallel.
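To illustrate the first point, here is a minimal conceptual sketch in Python — not the vSphere API; the host names and fields are invented — of how an orchestration script might select only the hosts already in maintenance mode for a parallel remediation batch:

```python
# Conceptual sketch only: models vLCM's rule that hosts must already be
# in maintenance mode to be remediated in parallel. Not a real API.
from dataclasses import dataclass

@dataclass
class EsxiHost:
    name: str
    in_maintenance_mode: bool

def parallel_remediation_batch(hosts, max_parallel=4):
    """Return the hosts eligible for one parallel upgrade batch.

    Only hosts the administrator has manually placed into maintenance
    mode are included; the rest are skipped until a later batch.
    """
    eligible = [h for h in hosts if h.in_maintenance_mode]
    return eligible[:max_parallel]

cluster = [
    EsxiHost("esxi-01", True),
    EsxiHost("esxi-02", False),   # still running workloads: skipped
    EsxiHost("esxi-03", True),
]
batch = parallel_remediation_batch(cluster)
print([h.name for h in batch])   # ['esxi-01', 'esxi-03']
```

In the real product, vLCM performs this selection for you once the feature is enabled; the sketch just makes the eligibility rule explicit.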

In vSphere 8, minimizing the dependency on vCenter for checking the latest status of the cluster is another improvement. By utilizing a distributed key-value store, even if the vCenter Server fails and its backup is old, you will not lose any cluster state.

But let’s check how the distributed key-value store can minimize the reliance on the vCenter Server. Each ESXi host in the cluster holds a copy of the distributed key-value store containing the cluster state, which becomes the source of truth.

So when vCenter fails and is recovered from a backup, it queries the cluster to learn what changed while it was down. If there are any changes, the cluster informs vCenter of the latest updates.
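The reconciliation flow above can be sketched as a small simulation. This is purely illustrative — the data model is invented and is not the actual vSphere internal format:

```python
# Conceptual sketch of the recovery flow: each ESXi host carries a
# versioned copy of the cluster state, and a restored vCenter adopts
# any state newer than its stale backup. Fields are made up.

def latest_cluster_state(host_states):
    """The highest-versioned copy held by the hosts is the source of truth."""
    return max(host_states, key=lambda s: s["version"])

def recover_vcenter(backup_state, host_states):
    """After restoring vCenter from backup, query the cluster and adopt
    the cluster's state if it is newer than the backup."""
    cluster = latest_cluster_state(host_states)
    if cluster["version"] > backup_state["version"]:
        return cluster          # cluster changed while vCenter was down
    return backup_state         # backup is already current

backup = {"version": 7, "ha_enabled": True, "drs_enabled": False}  # old backup
hosts = [
    {"version": 9, "ha_enabled": True, "drs_enabled": True},  # changed later
    {"version": 9, "ha_enabled": True, "drs_enabled": True},
]
print(recover_vcenter(backup, hosts))  # the cluster's newer state wins
```

The key idea is simply that the cluster, not the backup file, has the final say about the current configuration.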

Moreover, vSphere Configuration Profiles — essentially a desired-state configuration store for ESXi that could replace Host Profiles — is another vSphere 8 feature, currently in technical preview!

This feature is going to provide a consistent configuration across the cluster. So instead of manually exporting the configuration from one host and managing the attachment of a host profile to individual ESXi hosts, a more unified configuration is applied at the cluster level. In addition, with a vSphere Configuration Profile, like a host profile, you can check whether the hosts in the cluster are compliant with the profile or not, and if not, you can remediate them to keep the configuration consistent across the cluster.
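The compliance-and-remediate cycle can be modeled in a few lines. Again, this is a conceptual sketch in the spirit of vSphere Configuration Profiles — the configuration keys are illustrative, not the real schema:

```python
# Illustrative compliance check: compare each host's configuration
# against the cluster's desired profile and report drifted keys.

def compliance_report(profile, host_configs):
    """Return {host: {key: (desired, actual)}} for non-compliant keys only."""
    drift = {}
    for host, config in host_configs.items():
        diffs = {k: (v, config.get(k)) for k, v in profile.items()
                 if config.get(k) != v}
        if diffs:
            drift[host] = diffs
    return drift

def remediate(profile, host_configs):
    """Apply the desired profile so every host becomes compliant."""
    return {host: {**config, **profile} for host, config in host_configs.items()}

profile = {"ntp_server": "ntp.example.com", "ssh_enabled": False}
hosts = {
    "esxi-01": {"ntp_server": "ntp.example.com", "ssh_enabled": False},
    "esxi-02": {"ntp_server": "old-ntp.local",   "ssh_enabled": True},
}
print(compliance_report(profile, hosts))   # only esxi-02 has drifted
```

After remediation, a second compliance check on the same profile should come back empty — which is exactly the "check, then remediate" loop the feature provides at the cluster level.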

Virtual hardware version 20 has been introduced in vSphere 8! We can categorize the new features into three groups:

  • Virtual hardware innovation – Device Virtualization Extension, among other features in this group
  • Guest services for applications – application-aware migration is one of the most interesting capabilities that has been enhanced
  • Performance-based enhancements – support for device groups and virtual hyper-threading

In vSphere 8, one improvement in the area of AI/ML is support for device groups, which can be a combination of NICs and GPUs, but from the virtual machine configuration perspective, the group is detected as a single PCIe device.

Add a new device group (VMware 2022)

With this feature, virtual machines can consume complementary devices easily, and customers do not need to work out for themselves which combination of devices is a suitable candidate for their type of workload, because the hardware vendors group the devices based on the use cases. At the time of writing this post, customers are only consumers of device groups; they cannot choose or change the devices within a group. It is also worth knowing that vSphere HA and DRS are aware of VMs consuming device groups. So if a VM needs to fail over, HA will place it on a host that can provide the same device group.
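To make the placement idea concrete, here is a small illustrative model — not the vSphere API; device and host names are invented — of a vendor-defined device group exposed as one assignable unit, with HA considering only hosts that offer the whole group:

```python
# Illustrative model of a device group: a NIC + GPU combination the VM
# sees as a single assignable unit. HA/DRS-style placement must find a
# host offering the entire group, not the individual devices.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceGroup:
    name: str
    devices: tuple  # e.g. a GPU paired with a NIC by the vendor

def ha_failover_candidates(group, hosts):
    """Hosts eligible to receive a VM that consumes this device group."""
    return [name for name, groups in hosts.items() if group in groups]

gpu_nic = DeviceGroup("gpu-nic-combo", ("vendor-gpu", "vendor-nic"))
hosts = {
    "esxi-01": {gpu_nic},
    "esxi-02": set(),        # lacks the group: not a valid failover target
    "esxi-03": {gpu_nic},
}
print(ha_failover_candidates(gpu_nic, hosts))  # ['esxi-01', 'esxi-03']
```

The point of the model is that the group is the unit of scheduling: a host with only the GPU or only the NIC would not qualify.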

One of the issues we have all faced is migrating a virtual machine that consumes specific hardware, like DirectPath I/O. In vSphere 8, Device Virtualization Extension is supported, which is a new framework and set of APIs for vendors to support more virtualization features, like live migration. So all you need to do is install an appropriate Device Virtualization Extension driver both on the ESXi host and in the guest OS, and then you can vMotion the virtual machine to another host that also supports the same kind of hardware.
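A hypothetical pre-check for that requirement could look like this — this is not a real vSphere call, just a sketch of the rule that the matching DVX driver must be present on both the source and the destination host:

```python
# Hypothetical sketch (not a vSphere API): vMotion of a VM using a
# DVX-backed device requires the VM's DVX driver to be available on
# both the source and the destination host.

def can_vmotion(vm_dvx_driver, source_drivers, dest_drivers):
    """True when the VM's DVX driver is present on both hosts."""
    if vm_dvx_driver is None:       # VM uses no DVX device: always movable
        return True
    return vm_dvx_driver in source_drivers and vm_dvx_driver in dest_drivers

print(can_vmotion("vendor-dvx", {"vendor-dvx"}, {"vendor-dvx"}))  # True
print(can_vmotion("vendor-dvx", {"vendor-dvx"}, set()))           # False
```

This mirrors the constraint stated above: the destination host must support the same kind of hardware and carry the matching driver before the migration is allowed.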

In the next blog post, I will go through vSAN 8 and the new architecture called vSAN Express Storage Architecture and its new features and capabilities, which from my point of view are unique and revolutionary!

Hope this helps you! 🙂