Enable vSphere with Tanzu on a Cluster with NSX-T as the Networking Stack
Through the vSphere Automation APIs, you can enable a vSphere cluster for managing Kubernetes workloads. A cluster configured with NSX-T Data Center supports running vSphere Pods and Tanzu Kubernetes clusters.
To enable a vSphere cluster for Kubernetes workload management, you use the services under the namespace_management package.
Prerequisites
- Verify that your environment meets the system requirements for enabling vSphere with Tanzu on the cluster. For more information about the requirements, see the vSphere with Tanzu Configuration and Management documentation.
- Verify that NSX-T Data Center is installed and configured. See Configuring NSX-T Data Center for vSphere with Tanzu.
- Create storage policies for the placement of pod ephemeral disks, container images, and the Supervisor Cluster control plane VMs.
- Verify that DRS is enabled in fully automated mode and that HA is enabled on the cluster.
- Configure shared storage for the cluster. Shared storage is required for vSphere DRS, HA, and storing persistent volumes of containers.
- Verify that the user who you use to access the vSphere Automation services has the Modify cluster-wide configuration privilege on the cluster.
- Create a subscribed content library on the vCenter Server system to accommodate the VM image that is used for creating the nodes of the Tanzu Kubernetes clusters.
Procedure
- Retrieve the IDs of the tag-based storage policies that you configured for vSphere with Tanzu.
Use the Policies service to retrieve a list of all storage policies and then filter the policies to get the IDs of the policies that you configured for the Supervisor Cluster.
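The following Java sketch shows this filtering step. It assumes an authenticated com.vmware.vapi.bindings.StubFactory instance named stubFactory, and the policy name "tanzu-control-plane" is a placeholder for the name of a policy that you created:

import com.vmware.vcenter.storage.Policies;
import com.vmware.vcenter.storage.PoliciesTypes;

// List all storage policies and keep the ID of the one whose name
// matches the policy created for the Supervisor Cluster control plane.
Policies policies = stubFactory.createStub(Policies.class);
String masterPolicyId = null;
for (PoliciesTypes.Summary summary : policies.list(null)) {  // null filter: list all policies
    if ("tanzu-control-plane".equals(summary.getName())) {
        masterPolicyId = summary.getPolicy();                // policy identifier
    }
}

Repeat the lookup for the ephemeral disk and image cache policies to obtain the ephemeralPolicyId and imagePolicyId values used later in this procedure.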
- Retrieve the IDs of the vSphere Distributed Switch and the NSX Edge cluster that you created when configuring NSX-T Data Center for vSphere with Tanzu.
Use the DistributedSwitchCompatibility service to list the vSphere Distributed Switches associated with the vSphere cluster and retrieve the ID of the Distributed Switch that you configured to handle overlay networking for the Supervisor Cluster. Then use the EdgeClusterCompatibility service to list the NSX Edge clusters that are associated with the vSphere cluster and that Distributed Switch, and retrieve the ID of the NSX Edge cluster that has the tier-0 gateway that you want to use for the namespaces networking.
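A sketch of both lookups, again assuming the stubFactory from Step 1 and a clusterId variable that holds the managed object ID of the vSphere cluster (for example, "domain-c9"); taking the first compatible result is a simplification for illustration:

import com.vmware.vcenter.namespace_management.DistributedSwitchCompatibility;
import com.vmware.vcenter.namespace_management.DistributedSwitchCompatibilityTypes;
import com.vmware.vcenter.namespace_management.EdgeClusterCompatibility;
import com.vmware.vcenter.namespace_management.EdgeClusterCompatibilityTypes;

// Find a Distributed Switch on the cluster that is compatible with
// vSphere with Tanzu (the one configured for overlay networking).
DistributedSwitchCompatibility dvsCompat =
        stubFactory.createStub(DistributedSwitchCompatibility.class);
String dvsId = null;
for (DistributedSwitchCompatibilityTypes.Summary s : dvsCompat.list(clusterId, null)) {
    if (Boolean.TRUE.equals(s.getCompatible())) {
        dvsId = s.getDistributedSwitch();
    }
}

// Find a compatible NSX Edge cluster associated with that switch.
EdgeClusterCompatibility edgeCompat =
        stubFactory.createStub(EdgeClusterCompatibility.class);
String edgeClusterId = null;
for (EdgeClusterCompatibilityTypes.Summary s :
        edgeCompat.list(clusterId, dvsId, null)) {
    if (Boolean.TRUE.equals(s.getCompatible())) {
        edgeClusterId = s.getEdgeCluster();
    }
}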
- Retrieve the ID of the port group for the management network that you configured for management traffic.
Use the Networks service to list the visible networks available on the vCenter Server instance that match your filter criteria and then retrieve the ID of the management network that you previously configured.
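A sketch using the com.vmware.vcenter.Network service, where "management-pg" is a placeholder for the name of your management port group:

import java.util.Collections;
import com.vmware.vcenter.Network;
import com.vmware.vcenter.NetworkTypes;

// Filter the visible networks by name and type to find the
// distributed port group that carries the management traffic.
Network networkService = stubFactory.createStub(Network.class);
NetworkTypes.FilterSpec filter = new NetworkTypes.FilterSpec.Builder()
        .setNames(Collections.singleton("management-pg"))
        .setTypes(Collections.singleton(NetworkTypes.Type.DISTRIBUTED_PORTGROUP))
        .build();
String managementNetworkId = networkService.list(filter).get(0).getNetwork();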
- Create a Clusters.EnableSpec / ClustersTypes.EnableSpec instance and define the parameters of the Supervisor Cluster that you want to create. A combined sketch that builds the specification follows the parameter descriptions below.
You must specify the following required parameters of the enable specification:
- Storage policy settings and file volume support. The storage policy that you set for each of the following parameters ensures that the respective object is placed on the datastore referenced in the storage policy. You can use the same storage policy or different policies for the different inventory objects.
Parameter
Description
ephemeral_storage_policy / setEphemeralStoragePolicy(java.lang.String ephemeralStoragePolicy)
Specify the ID of the storage policy that you created to control the storage placement of the ephemeral disks of the vSphere Pods.
image_storage / setImageStorage(ClustersTypes.ImageStorageSpec imageStorage)
Set the specification of the storage policy that you created to control the placement of the cache of container images.
master_storage_policy / setMasterStoragePolicy(java.lang.String masterStoragePolicy)
Specify the ID of the storage policy that you created to control the placement of the Supervisor Cluster control plane VMs.
Optionally, you can activate file volume support by using cns_file_config / setCnsFileConfig(CNSFileConfig cnsFileConfig). See Enabling ReadWriteMany Support.
- Management network settings. Configure the management traffic settings for the Supervisor Cluster control plane.
Parameter
Description
network_provider / setNetworkProvider(ClustersTypes.NetworkProvider networkProvider)
Specify the networking stack that must be used when the Supervisor Cluster is created. To use NSX-T Data Center as the networking solution for the cluster, select NSXT_CONTAINER_PLUGIN.
master_management_network / setMasterManagementNetwork(ClustersTypes.NetworkSpec masterManagementNetwork)
Enter the cluster network specification for the Supervisor Cluster control plane. You must enter values for the following required properties:
- network / setNetwork(java.lang.String network) - Use the management network ID retrieved in Step 3.
- mode / setMode(ClustersTypes.NetworkSpec.Ipv4Mode mode) - Set STATICRANGE or DHCP as the IPv4 address assignment mode. The DHCP mode allows an IPv4 address to be assigned automatically to the Supervisor Cluster control plane by a DHCP server. In this mode, you must also set the floating IP address used by the HA primary cluster through floating_IP / setFloatingIP(java.lang.String floatingIP). Use the DHCP mode for test purposes only. The STATICRANGE mode allows the Supervisor Cluster control plane to have a stable IPv4 address and is suitable for production environments.
- address_range / setAddressRange(ClustersTypes.Ipv4Range addressRange) - Optionally, you can configure the IPv4 address range for one or more interfaces of the management network. Specify the following settings:
- The starting IP address that must be used for reserving consecutive IP addresses for the Supervisor Cluster control plane. Use up to 5 consecutive IP addresses.
- The number of IP addresses in the range.
- The IP address of the gateway associated with the specified range.
- The subnet mask to be used for the management network.
-
master_DNS / setMasterDNS(java.util.List<java.lang.String> masterDNS)
Enter a list of the DNS server addresses that must be used by the Supervisor Cluster control plane. If your vCenter Server instance is registered with an FQDN, you must enter the IP addresses of the DNS servers that you use with the vSphere environment so that the FQDN is resolvable in the Supervisor Cluster. Specify the DNS server addresses in the order of preference.
master_DNS_search_domains / setMasterDNSSearchDomains(java.util.List<java.lang.String> masterDNSSearchDomains)
Set a list of domain names that DNS searches when looking up a host name in the Kubernetes API server. Order the domains in the list by preference.
master_NTP_servers / setMasterNTPServers(java.util.List<java.lang.String> masterNTPServers)
Specify a list of IP addresses or DNS names of the NTP servers that you use in your environment, if any. Make sure that you configure the same NTP servers for the vCenter Server instance, all hosts in the cluster, NSX-T Data Center, and vSphere with Tanzu. If you do not set an NTP server, VMware Tools time synchronization is enabled.
- Workload network settings. Configure the settings for the namespace networks. The namespace network settings provide connectivity to vSphere Pods and namespaces created in the Supervisor Cluster.
Parameter
Description
ncp_cluster_network_spec / setNcpClusterNetworkSpec(ClustersTypes.NCPClusterNetworkEnableSpec ncpClusterNetworkSpec)
Set the specification for the Supervisor Cluster configured with the NSX-T Data Center networking stack. Specify the following cluster networking configuration parameters for NCPClusterNetworkEnableSpec:
- cluster_distributed_switch / setClusterDistributedSwitch(java.lang.String clusterDistributedSwitch) - The vSphere Distributed Switch that handles overlay networking for the Supervisor Cluster. Use the ID retrieved in Step 2.
- nsx_edge_cluster / setNsxEdgeCluster(java.lang.String nsxEdgeCluster) - The NSX Edge cluster that has the tier-0 gateway that you want to use for namespace networking. Use the ID retrieved in Step 2.
- nsx_tier0_gateway / setNsxTier0Gateway(java.lang.String nsxTier0Gateway) - The tier-0 gateway that is associated with the cluster tier-1 gateway. You can retrieve a list of NSXTier0Gateway objects associated with a particular vSphere Distributed Switch and determine the ID of the tier-0 gateway that you want to set.
- namespace_subnet_prefix / setNamespaceSubnetPrefix(java.lang.Long namespaceSubnetPrefix) - The subnet prefix that defines the size of the subnet reserved for namespace segments. The default is 28.
- routed_mode / setRoutedMode(java.lang.Boolean routedMode) - Whether the workload network operates in routed mode instead of NAT mode. If set to true, no NAT is performed:
- The IP addresses of the workloads are directly accessible from outside the tier-0 gateway and you do not need to configure the egress CIDRs.
- File Volume storage is not supported.
- egress_cidrs / setEgressCidrs(java.util.List<Ipv4Cidr> egressCidrs) - The external CIDR blocks from which NSX-T Manager assigns IP addresses used for performing source NAT (SNAT) from internal vSphere Pod IP addresses to external IP addresses. Only one egress IP address is assigned for each namespace in the Supervisor Cluster. These IP ranges must not overlap with the IP ranges of the vSphere Pods, ingress, Kubernetes services, or other services running in the data center.
- ingress_cidrs / setIngressCidrs(java.util.List<Ipv4Cidr> ingressCidrs) - The external CIDR blocks from which the ingress IP range for the Kubernetes services is determined. These IP ranges are used for load balancer services and Kubernetes ingress. All Kubernetes ingress services in the same namespace share a common IP address. Each load balancer service is assigned a unique IP address. The ingress IP ranges must not overlap with the IP ranges of the vSphere Pods, egress, Kubernetes services, or other services running in the data center.
- pod_cidrs / setPodCidrs(java.util.List<Ipv4Cidr> podCidrs) - The internal CIDR blocks from which the IP ranges for vSphere Pods are determined. The IP ranges must not overlap with the IP ranges of the ingress, egress, Kubernetes services, or other services running in the data center. All vSphere Pod CIDR blocks must be of at least /23 subnet size.
worker_DNS / setWorkerDNS(java.util.List<java.lang.String> workerDNS)
Set a list of the IP addresses of the DNS servers that must be used on the worker nodes. Use different DNS servers than the ones you set for the Supervisor Cluster control plane.
service_cidr / setServiceCidr(Ipv4Cidr serviceCidr)
Specify the CIDR block from which the IP addresses for Kubernetes services are allocated. The IP range must not overlap with the ranges of the vSphere Pods, ingress, egress, or other services running in the data center.
For the Kubernetes services and the vSphere Pods, you can use the default values, which are based on the cluster size that you specify.
- Supervisor Cluster size. You must set a size for the Supervisor Cluster, which affects the resources allocated to the Kubernetes infrastructure. The cluster size also determines the default maximum values for the IP address ranges of the vSphere Pods and Kubernetes services running in the cluster. You can use the ClusterSizeInfo.get() / GET https://<server>/api/vcenter/namespace-management/cluster-size-info calls to retrieve information about the default values associated with each cluster size.
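The following Java sketch pulls these settings together into one enable specification. It assumes the ID variables from Steps 1-3 (masterPolicyId, ephemeralPolicyId, imagePolicyId, managementNetworkId, dvsId, edgeClusterId); all IP addresses and CIDR blocks are placeholders for your environment, and the cidr() helper is a hypothetical convenience method shown after the block:

import java.util.Collections;
import com.vmware.vcenter.namespace_management.ClustersTypes;
import com.vmware.vcenter.namespace_management.Ipv4Cidr;
import com.vmware.vcenter.namespace_management.SizingHint;

ClustersTypes.EnableSpec spec = new ClustersTypes.EnableSpec();

// Cluster size, which determines the defaults for the pod and service IP ranges.
spec.setSizeHint(SizingHint.SMALL);

// Storage policy settings (Step 1 IDs).
spec.setMasterStoragePolicy(masterPolicyId);
spec.setEphemeralStoragePolicy(ephemeralPolicyId);
ClustersTypes.ImageStorageSpec imageStorage = new ClustersTypes.ImageStorageSpec();
imageStorage.setStoragePolicy(imagePolicyId);
spec.setImageStorage(imageStorage);

// Management network settings (Step 3 ID), with a static range of
// 5 consecutive addresses for the control plane.
spec.setNetworkProvider(ClustersTypes.NetworkProvider.NSXT_CONTAINER_PLUGIN);
ClustersTypes.Ipv4Range range = new ClustersTypes.Ipv4Range();
range.setStartingAddress("192.168.110.101");
range.setAddressCount(5L);
range.setGateway("192.168.110.1");
range.setSubnetMask("255.255.255.0");
ClustersTypes.NetworkSpec mgmtNetwork = new ClustersTypes.NetworkSpec();
mgmtNetwork.setNetwork(managementNetworkId);
mgmtNetwork.setMode(ClustersTypes.NetworkSpec.Ipv4Mode.STATICRANGE);
mgmtNetwork.setAddressRange(range);
spec.setMasterManagementNetwork(mgmtNetwork);
spec.setMasterDNS(Collections.singletonList("10.10.10.10"));
spec.setMasterDNSSearchDomains(Collections.singletonList("example.com"));
spec.setMasterNTPServers(Collections.singletonList("ntp.example.com"));

// Workload network settings (Step 2 IDs). The worker DNS server
// differs from the one set for the control plane.
ClustersTypes.NCPClusterNetworkEnableSpec ncpSpec =
        new ClustersTypes.NCPClusterNetworkEnableSpec();
ncpSpec.setClusterDistributedSwitch(dvsId);
ncpSpec.setNsxEdgeCluster(edgeClusterId);
ncpSpec.setEgressCidrs(Collections.singletonList(cidr("10.10.20.0", 24L)));
ncpSpec.setIngressCidrs(Collections.singletonList(cidr("10.10.30.0", 24L)));
ncpSpec.setPodCidrs(Collections.singletonList(cidr("10.244.0.0", 21L)));
spec.setNcpClusterNetworkSpec(ncpSpec);
spec.setWorkerDNS(Collections.singletonList("10.10.10.11"));

// CIDR block from which the Kubernetes services addresses are allocated.
spec.setServiceCidr(cidr("10.96.0.0", 23L));

The hypothetical helper wraps the Ipv4Cidr construction:

static Ipv4Cidr cidr(String address, long prefix) {
    Ipv4Cidr c = new Ipv4Cidr();
    c.setAddress(address);
    c.setPrefix(prefix);
    return c;
}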
- Optional. Associate the Supervisor Cluster with the subscribed content library that you created for provisioning Tanzu Kubernetes clusters. See Creating, Securing, and Synchronizing Content Libraries for Tanzu Kubernetes Releases.
To set the library, use default_kubernetes_service_content_library / setDefaultKubernetesServiceContentLibrary(java.lang.String defaultKubernetesServiceContentLibrary) and pass the subscribed content library ID.
- Enable vSphere with Tanzu on a specific cluster by passing the cluster enable specification to the Clusters service.
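A minimal sketch of the final call, assuming the stubFactory, clusterId, and spec variables from the previous steps; the subscribedLibraryId value is a placeholder for your content library ID:

import com.vmware.vcenter.namespace_management.Clusters;

// Optionally associate the subscribed content library (previous step).
spec.setDefaultKubernetesServiceContentLibrary(subscribedLibraryId);

// Start the enablement. The call returns after vCenter Server accepts
// the request; a task then turns the cluster into a Supervisor Cluster.
Clusters clusters = stubFactory.createStub(Clusters.class);
clusters.enable(clusterId, spec);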
Results
A task runs on vCenter Server that turns the cluster into a Supervisor Cluster. After the task completes, Kubernetes control plane nodes are created on the hosts that are part of the cluster enabled with vSphere with Tanzu. Now you can create vSphere Namespaces.
What to do next
Create and configure namespaces on the Supervisor Cluster. See Create a vSphere Namespace.