API Description | API Path |
---|---|
Returns information about Advanced Load Balancer controller cluster status. |
GET /policy/api/v1/alb/controller-nodes/cluster
|
Re-trigger clustering for Advanced Load Balancer nodes. |
PUT /policy/api/v1/alb/controller-nodes/cluster
|
Returns cluster configuration for the Advanced Load Balancer controller cluster. |
GET /policy/api/v1/alb/controller-nodes/clusterconfig
|
Set the cluster configuration for the Advanced Load Balancer controller cluster. The VIP can be set only once; attempting to change the VIP after it is set returns an error. |
POST /policy/api/v1/alb/controller-nodes/clusterconfig
|
Returns request information for every attempted deployment of a cluster node VM. |
GET /policy/api/v1/alb/controller-nodes/deployments
|
Deploys an Advanced Load Balancer controller node VM as specified by the deployment config. |
POST /policy/api/v1/alb/controller-nodes/deployments
|
Returns deployment request information for a specific attempted deployment of a cluster node VM. |
GET /policy/api/v1/alb/controller-nodes/deployments/{node-id}
|
Attempts to unregister and undeploy a specified auto-deployed Advanced Load Balancer controller node VM. If the VM is a member of a cluster, it is automatically detached from the cluster before being unregistered and undeployed. Alternatively, if the original deployment attempt failed or the VM is not found, this call cleans up the deployment information associated with the deployment attempt. Note: if a VM has been successfully auto-deployed, the associated deployment information is not deleted unless and until the VM is successfully deleted. |
POST /policy/api/v1/alb/controller-nodes/deployments/{node-id}?action=delete
|
Updates an Advanced Load Balancer controller cluster node VM. Only updates to the password, NTP, and DNS servers are supported. If the controller is in a cluster, all nodes in the cluster are updated with the provided values. |
PUT /policy/api/v1/alb/controller-nodes/deployments/{node-id}
|
Returns the current deployment or undeployment status for a VM, along with any other relevant information, such as error messages. |
GET /policy/api/v1/alb/controller-nodes/deployments/{node-id}/status
|
Returns information about all form factors available for Advanced Load Balancer controller nodes. |
GET /policy/api/v1/alb/controller-nodes/form-factors
|
Register a Collection of API Calls at a Single Endpoint. Enables you to make multiple API requests using a single request. The batch API takes in an array of logical HTTP requests represented as JSON arrays. Each request has a method (GET, PUT, POST, or DELETE), a relative_url (the portion of the URL after https://<nsx-mgr>/api/), an optional headers array (corresponding to HTTP headers), and an optional body (for POST and PUT requests). The batch API returns an array of logical HTTP responses represented as JSON arrays. Each response has a status code, an optional headers array, and an optional body (which is a JSON-encoded string). The batch API is not supported for any of the policy multi-tenancy related APIs; the multi-tenancy APIs start with the path /orgs/. This API is deprecated; instead, use the hierarchical API in the NSX-T policy API. A sketch of the request format follows. |
POST /policy/api/v1/batch
(Deprecated)
POST /api/v1/batch (Deprecated) |
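As a sketch of the request format described above: the endpoint is real, but the inner URLs are illustrative, and the body field names (requests, method, relative_url) follow this description; verify them against the BatchRequest schema.

```bash
# Submit two logical GET requests in one batch call (deprecated API).
# Each relative_url is the portion of the URL after https://<nsx-mgr>/api/.
curl -k -s -u 'admin:<password>' \
  -H 'Content-Type: application/json' \
  -d '{
        "requests": [
          { "method": "GET", "relative_url": "/v1/cluster" },
          { "method": "GET", "relative_url": "/v1/cluster/status" }
        ]
      }' \
  https://nsx-mgr.example.com/api/v1/batch
```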
Read Cluster Configuration. Returns information about the NSX cluster configuration. An NSX cluster has two functions or purposes, commonly referred to as "roles": control and management. Each NSX installation has a single cluster. Separate NSX clusters do not share data; in other words, a given data-plane node is attached to only one cluster, not to multiple clusters. A minimal example call follows. |
GET /api/v1/cluster
|
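For reference, a minimal authenticated call to this endpoint might look like the following; the manager address and credentials are placeholders, and the same pattern applies to the other GET endpoints in this section.

```bash
# Read the NSX cluster configuration. -k skips certificate verification;
# drop it once the manager certificate is trusted.
curl -k -s -u 'admin:<password>' \
  -H 'Accept: application/json' \
  https://nsx-mgr.example.com/api/v1/cluster
```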
Join this node to an NSX cluster. |
POST /api/v1/cluster?action=join_cluster
|
List Cluster Profiles. Returns a paginated list of cluster profiles. Cluster profiles define policies for edge clusters and bridge clusters. |
GET /api/v1/cluster-profiles
|
Creates a cluster profile. The resource_type is required. |
POST /api/v1/cluster-profiles
|
Deletes a specified cluster profile. |
DELETE /api/v1/cluster-profiles/{cluster-profile-id}
|
Returns information about a specified cluster profile. |
GET /api/v1/cluster-profiles/{cluster-profile-id}
|
Modifies a specified cluster profile. The body of the PUT request must include the resource_type. |
PUT /api/v1/cluster-profiles/{cluster-profile-id}
|
Returns information about the specified NSX cluster node. |
GET /api/v1/cluster/{node-id}
|
Detach a node from the cluster. |
POST /api/v1/cluster/{node-id}?action=remove_node
|
Invoke DELETE request on target cluster node |
DELETE /api/v1/cluster/{target-node-id}/{target-uri}
|
Invoke GET request on target cluster node |
GET /api/v1/cluster/{target-node-id}/{target-uri}
|
Invoke POST request on target cluster node |
POST /api/v1/cluster/{target-node-id}/{target-uri}
|
Invoke PUT request on target cluster node |
PUT /api/v1/cluster/{target-node-id}/{target-uri}
|
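The four invoke endpoints above proxy a request to a specific cluster node, where {target-uri} is the node-level API path to run on that node. A sketch, using the node properties endpoint documented later in this section:

```bash
# Read node properties of one specific cluster node via the proxy endpoint.
# <node-uuid> is the target node's ID; "node" is the node-level URI invoked there.
curl -k -s -u 'admin:<password>' \
  https://nsx-mgr.example.com/api/v1/cluster/<node-uuid>/node
```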
Returns the ID of the certificate that is used as the cluster certificate for the MP. |
GET /api/v1/cluster/api-certificate
|
Clears the certificate used for the MP cluster. This does not affect the certificate itself; it just means that from now on, individual certificates will be used on each MP node. This affects all nodes in the cluster. This API is deprecated; instead, use the /api/v1/cluster/api-certificate?action=set_cluster_certificate API to set the cluster certificate to a different one. |
POST /api/v1/cluster/api-certificate?action=clear_cluster_certificate
(Deprecated)
|
Sets the certificate used for the MP cluster. This affects all nodes in the cluster. If the certificate is a CA-signed certificate, the request fails if the whole chain (leaf, intermediate, root) is not imported. An illustrative call follows. |
POST /api/v1/cluster/api-certificate?action=set_cluster_certificate
(Deprecated)
|
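A sketch of pointing the cluster at an already-imported certificate. Passing the certificate ID as a certificate_id query parameter is an assumption here; verify the parameter name against the API schema.

```bash
# Set the MP cluster certificate (deprecated API). The whole chain
# (leaf, intermediate, root) must already be imported.
# certificate_id as a query parameter is an assumption; verify in the schema.
CERT_ID='<certificate-uuid>'
curl -k -s -u 'admin:<password>' -X POST \
  "https://nsx-mgr.example.com/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=${CERT_ID}"
```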
Reads the configuration of the NSX API service. |
GET /api/v1/cluster/api-service
|
Updates the configuration of the NSX API service. Changes are applied to all nodes in the cluster. The API service on each node restarts after it is updated using this API. There may be a delay of up to a minute between the time this API call completes and when the new configuration goes into effect. |
PUT /api/v1/cluster/api-service
|
Returns the configured cluster virtual IPv4 and IPv6 addresses, or null if not configured. |
GET /api/v1/cluster/api-virtual-ip
|
Clears the cluster virtual IPv4 or IPv6 address. Note: the query parameter ?action=clear_virtual_ip clears the virtual IPv4 address, and ?action=clear_virtual_ip6 clears the virtual IPv6 address. |
POST /api/v1/cluster/api-virtual-ip?action=clear_virtual_ip
POST /api/v1/cluster/api-virtual-ip?action=clear_virtual_ip6 |
Sets the cluster virtual IPv4 and IPv6 addresses. Note: all nodes in the management cluster must be in the same subnet; if not, a 409 CONFLICT status is returned. The query parameter ip_address sets the virtual IPv4 address and ip6_address sets the virtual IPv6 address; either or both parameters can be specified. Updating one parameter does not change the value of the other, unspecified parameter. Example calls follow. |
POST /api/v1/cluster/api-virtual-ip?action=set_virtual_ip
|
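The ip_address and ip6_address query parameters come straight from the description above; the addresses themselves are placeholders.

```bash
# Set both virtual addresses in one call; either parameter may be omitted,
# and the unspecified one keeps its current value.
curl -k -s -u 'admin:<password>' -X POST \
  "https://nsx-mgr.example.com/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=10.1.1.50&ip6_address=fd00::50"

# Later, clear only the virtual IPv4 address; the IPv6 address is untouched.
curl -k -s -u 'admin:<password>' -X POST \
  "https://nsx-mgr.example.com/api/v1/cluster/api-virtual-ip?action=clear_virtual_ip"
```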
Returns a list of backup frames and some metadata to be used by the UI. |
GET /api/v1/cluster/backups/ui_frames
|
Synchronizes the repository data between NSX Managers. Attempts to synchronize the repository partition on the NSX Manager; the repository partition contains packages required for the install and upgrade of NSX components. Normally there is no need for the user to call this API explicitly. |
POST /api/v1/cluster/node?action=repo_sync
|
Returns information about all NSX cluster nodes. |
GET /api/v1/cluster/nodes
|
Adds a new controller to the NSX cluster. The controller comes with the new node. |
POST /api/v1/cluster/nodes
(Deprecated)
|
Removes the specified controller from the NSX cluster. Before you can remove a controller from the cluster, you must shut down the controller service with the "stop service controller" command. |
DELETE /api/v1/cluster/nodes/{node-id}
(Deprecated)
|
Returns information about the specified NSX cluster node. |
GET /api/v1/cluster/nodes/{node-id}
(Deprecated)
|
Returns the number of interfaces on the node and detailed information about each interface. Interface information includes MTU, broadcast and host IP addresses, link and admin status, MAC address, network mask, and the IP configuration method (static or DHCP). Note that if virtual IP (VIP) addresses are configured, virtual interfaces are not returned. |
GET /api/v1/cluster/nodes/{node-id}/network/interfaces
|
Returns detailed information about the specified interface. Interface information includes MTU, broadcast and host IP addresses, link and admin status, MAC address, network mask, and the IP configuration method (static or DHCP). |
GET /api/v1/cluster/nodes/{node-id}/network/interfaces/{interface-id}
|
On the specified interface, returns the number of received (rx), transmitted (tx), and dropped packets; the number of bytes and errors received and transmitted on the interface; and the number of detected collisions. |
GET /api/v1/cluster/nodes/{node-id}/network/interfaces/{interface-id}/stats
|
Returns the repository synchronization status for the manager represented by the given <node-id>. |
GET /api/v1/cluster/nodes/{node-id}/repo_sync/status
|
Reads the aggregated runtime status of a cluster node. |
GET /api/v1/cluster/nodes/{node-id}/status
|
Returns request information for every attempted deployment of a cluster node VM. |
GET /api/v1/cluster/nodes/deployments
|
Deploys and registers a cluster node VM as specified by the deployment config. Once the VM is deployed and powered on, it automatically joins the existing cluster. |
POST /api/v1/cluster/nodes/deployments
|
Returns deployment request information for a specific attempted deployment of a cluster node VM. |
GET /api/v1/cluster/nodes/deployments/{node-id}
|
Attempts to unregister and undeploy a specified auto-deployed cluster node VM. If the VM is a member of a cluster, it is automatically detached from the cluster before being unregistered and undeployed. Alternatively, if the original deployment attempt failed or the VM is not found, this call cleans up the deployment information associated with the deployment attempt. Note: if a VM has been successfully auto-deployed, the associated deployment information is not deleted unless and until the VM is successfully deleted. |
POST /api/v1/cluster/nodes/deployments/{node-id}?action=delete
|
Returns the current deployment or undeployment status for a VM, along with any other relevant information, such as error messages. |
GET /api/v1/cluster/nodes/deployments/{node-id}/status
|
Reads the aggregated runtime status of all cluster nodes. Deprecated; use GET /cluster/status instead. |
GET /api/v1/cluster/nodes/status
(Deprecated)
|
Returns status information for the NSX cluster control role and management role. |
GET /api/v1/cluster/status
|
Returns a list of all Central Node Config profiles. |
GET /api/v1/configs/central-config/node-config-profiles
|
Updates properties in the specified Central Node Config profile. |
PUT /api/v1/configs/central-config/node-config-profiles/{node-config-profile-id}
|
Returns properties of the specified Central Node Config profile. Sensitive data (like SNMP v2c community strings) is included only if the query parameter "show_sensitive_data" is true. |
GET /api/v1/configs/central-config/node-config-profiles/{profile-id}
|
Supports retrieving the following configuration of the inventory module: 1. the soft limit on the number of compute managers that can be registered. |
GET /api/v1/configs/inventory
|
Returns the NSX Management nodes global configuration. |
GET /policy/api/v1/configs/management
GET /api/v1/configs/management |
Modifies the NSX Management nodes global configuration. |
PUT /policy/api/v1/configs/management
PUT /api/v1/configs/management |
Node Mode. Currently only a switch from "VMC_LOCAL" to "VMC" is supported. Returns the new node mode if the request successfully changed it. Optionally provisions public OAuth2 client info. |
POST /api/v1/configs/node/mode
|
Scans the size of a directory domain. This call may be very expensive to run in some AD domain deployments; use it with caution. |
POST /api/v1/directory/domain-size
(Deprecated)
|
List all configured domains |
GET /api/v1/directory/domains
(Deprecated)
|
Create a directory domain |
POST /api/v1/directory/domains
(Deprecated)
|
Delete a specific domain with the given identifier. |
DELETE /api/v1/directory/domains/{domain-id}
(Deprecated)
|
Get a specific domain with the given identifier. |
GET /api/v1/directory/domains/{domain-id}
(Deprecated)
|
Invoke a full sync or delta sync for a specific domain, with an additional delay in seconds if needed. Stop sync will try to stop any pending sync and return to the idle state. |
POST /api/v1/directory/domains/{domain-id}
(Deprecated)
|
Updates a directory domain. An update to any field in the directory domain triggers a full sync. |
PUT /api/v1/directory/domains/{domain-id}
(Deprecated)
|
Search for directory groups within a domain based on a substring of a distinguished name (e.g. CN=User,DC=acme,DC=com). The search filter can optionally contain multiple search patterns (up to a maximum of 100) separated by '|' (URL-encoded as %7C). In this case, the search results are returned as the union of all matching criteria (e.g. CN=Ann,CN=Users,DC=acme,DC=com|CN=Bob,CN=Users,DC=acme,DC=com). An encoding example follows. |
GET /api/v1/directory/domains/{domain-id}/groups
(Deprecated)
|
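A sketch of the %7C-separated multi-pattern search described above. The point is the URL encoding of '|'; the name of the filter query parameter is an assumption, so check the API schema before relying on it.

```bash
# Search two distinguished-name patterns in one call (deprecated API).
# --data-urlencode encodes the '|' separator as %7C automatically.
# The 'filter' parameter name is an assumption; verify against the schema.
curl -k -s -G -u 'admin:<password>' \
  --data-urlencode 'filter=CN=Ann,CN=Users,DC=acme,DC=com|CN=Bob,CN=Users,DC=acme,DC=com' \
  https://nsx-mgr.example.com/api/v1/directory/domains/<domain-id>/groups
```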
Lists members of a directory group. A member group can be either a direct member of the group specified by group_id or a nested member of it. Both direct and nested member groups are returned. |
GET /api/v1/directory/domains/{domain-id}/groups/{group-id}/member-groups
(Deprecated)
|
List all configured domain LDAP servers |
GET /api/v1/directory/domains/{domain-id}/ldap-servers
(Deprecated)
|
Creates an LDAP server for a directory domain. More than one LDAP server can be created, but only one LDAP server is used to synchronize directory objects. If more than one LDAP server is configured, NSX tries all the servers until it is able to successfully connect to one. |
POST /api/v1/directory/domains/{domain-id}/ldap-servers
(Deprecated)
|
Delete an LDAP server for a directory domain. |
DELETE /api/v1/directory/domains/{domain-id}/ldap-servers/{server-id}
(Deprecated)
|
Get a specific LDAP server for a given directory domain |
GET /api/v1/directory/domains/{domain-id}/ldap-servers/{server-id}
(Deprecated)
|
Tests an LDAP server connection for an already configured domain. If the connection is successful, the response is HTTP status 200; otherwise the response is HTTP status 500 with a corresponding error message. |
POST /api/v1/directory/domains/{domain-id}/ldap-servers/{server-id}
(Deprecated)
|
Update an LDAP server for a directory domain. |
PUT /api/v1/directory/domains/{domain-id}/ldap-servers/{server-id}
(Deprecated)
|
Fetch all organization units for a Directory domain. |
POST /api/v1/directory/domains/{domain-id}/org-units
(Deprecated)
|
Get domain sync statistics for the given identifier |
GET /api/v1/directory/domains/{domain-id}/sync-stats
(Deprecated)
|
Tests LDAP server connectivity before the actual domain or LDAP server is configured. If the connectivity is good, the response is HTTP status 200; otherwise the response is HTTP status 500 with a corresponding error message. |
POST /api/v1/directory/ldap-server
(Deprecated)
|
Fetch all organization units for an LDAP server. |
POST /api/v1/directory/org-units
(Deprecated)
|
Returns information about the configured edge clusters, which enable you to group together transport nodes of the type EdgeNode and apply fabric profiles to all members of the edge cluster. Each edge node can participate in only one edge cluster. |
GET /api/v1/edge-clusters
|
Creates a new edge cluster. Only homogeneous members are supported; only TransportNodes backed by EdgeNode are allowed as cluster members. The DeploymentType (VIRTUAL_MACHINE|PHYSICAL_MACHINE) of these EdgeNodes is recommended to be the same, although EdgeCluster supports members of different deployment types. |
POST /api/v1/edge-clusters
|
Deletes the specified edge cluster. |
DELETE /api/v1/edge-clusters/{edge-cluster-id}
|
Returns information about the specified edge cluster. |
GET /api/v1/edge-clusters/{edge-cluster-id}
|
Replaces the transport node in the specified member of the edge cluster. This is a disruptive action: it moves all the LogicalRouterPorts (uplink and routerLink) hosted on the old transport_node to the new transport_node. The transport node cannot be present in another member of any edge cluster. |
POST /api/v1/edge-clusters/{edge-cluster-id}?action=replace_transport_node
|
Relocates auto-allocated service contexts from the edge node at the given index and removes the edge node from the edge cluster. For the API to perform the relocate-and-remove action, the edge node at the given index must have only auto-allocated service contexts; if any manually allocated service context is present on the edge cluster member, the task is not performed. It is also recommended to move the edge node into maintenance mode before executing the API; if it is not, the API moves the edge node into maintenance mode before performing the actual relocate-and-remove task. To maintain high availability, the edge cluster should have at least two healthy edge nodes for relocate and removal. Once the relocate action completes successfully, the edge node is removed from the edge cluster. |
POST /api/v1/edge-clusters/{edge-cluster-id}?action=relocate_and_remove
|
Modifies the specified edge cluster. Modifiable parameters include the description, display_name, and transport-node-id. If the optional fabric_profile_binding is included, resource_type and profile_id are required. Do a GET on the edge cluster, obtain the payload, and retain the member_index of the existing members as returned in the GET output. For new member additions, the member_index cannot be defined by the user; the system-allocated index of the new member can be read in the output of this API call or by doing a GET call. You cannot use this PUT API to replace the transport_node of an existing member because that is a disruptive action; an explicit API is exposed for doing so (see "ReplaceEdgeClusterMemberTransportNode"). EdgeCluster only supports homogeneous members; only TransportNodes backed by EdgeNode are allowed as cluster members. The DeploymentType (VIRTUAL_MACHINE|PHYSICAL_MACHINE) of these EdgeNodes is recommended to be the same, although EdgeCluster supports members of different deployment types. A sketch of the read-modify-write flow follows. |
PUT /api/v1/edge-clusters/{edge-cluster-id}
|
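A minimal sketch of that read-modify-write flow, assuming jq is available; the display name is illustrative, and the member_index values (and the _revision field) ride along unchanged in the payload.

```bash
# GET the full edge cluster payload, change only what is allowed
# (here display_name), keep member_index values intact, then PUT it back.
EC='https://nsx-mgr.example.com/api/v1/edge-clusters/<edge-cluster-id>'
curl -k -s -u 'admin:<password>' "$EC" \
  | jq '.display_name = "edge-cluster-renamed"' \
  | curl -k -s -u 'admin:<password>' -X PUT \
      -H 'Content-Type: application/json' -d @- "$EC"
```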
Returns the allocation details of an edge cluster and its members. Lists the edge node members, the active and standby services of each node, and utilization details of configured sub-pools. These allocation details can be monitored to trigger migration of certain service contexts to different edge nodes, to balance the utilization of edge node resources. |
GET /api/v1/edge-clusters/{edge-cluster-id}/allocation-status
|
Returns realized state information of an edge cluster. Any configuration update that affects the edge cluster (e.g. updating the edge cluster configuration) can use this API to get its realized state by passing a request_id returned by the configuration change operation. |
GET /api/v1/edge-clusters/{edge-cluster-id}/state
|
Returns the aggregated status for the edge cluster of the given ID, along with the status of all edge nodes in the cluster. The query parameter "source=realtime" is the only supported source. |
GET /api/v1/edge-clusters/{edge-cluster-id}/status
|
Returns information about all cloud native service instances. |
GET /api/v1/fabric/cloud-native-service-instances
|
Returns information about a particular cloud native service instance by external-id. |
GET /api/v1/fabric/cloud-native-service-instances/{external-id}
|
Returns information about all compute collections. |
GET /api/v1/fabric/compute-collections
|
Returns information about a specific compute collection. |
GET /api/v1/fabric/compute-collections/{cc-ext-id}
|
Performs an NSX-specific action on the compute collection. cc-ext-id should be of type VC_Cluster. |
POST /api/v1/fabric/compute-collections/{cc-ext-id}
|
Gets the status of member host nodes of the compute collection. Only NSX-prepared host nodes in the specified compute collection are included in the response. cc-ext-id should be of type VC_Cluster. |
GET /api/v1/fabric/compute-collections/{cc-ext-id}/member-status
|
Returns a list of physical network interfaces for all discovered nodes in a compute collection. Interface information includes the PNIC name, the hostswitch name it is attached to (if any), and the MAC address. |
GET /api/v1/fabric/compute-collections/{cc-ext-id}/network/physical-interfaces
|
Returns information about all compute managers. |
GET /api/v1/fabric/compute-managers
|
Registers a compute manager with NSX. The inventory service collects data from the registered compute manager. An illustrative payload follows. |
POST /api/v1/fabric/compute-managers
|
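An illustrative registration payload, assuming the common vCenter fields (server, origin_type, and a username/password credential with thumbprint); verify the exact field names against the ComputeManager schema.

```bash
# Register a vCenter as a compute manager. Field names are assumptions
# based on the common ComputeManager schema; verify before use.
curl -k -s -u 'admin:<password>' -X POST \
  -H 'Content-Type: application/json' \
  -d '{
        "server": "vcenter.example.com",
        "origin_type": "vCenter",
        "credential": {
          "credential_type": "UsernamePasswordLoginCredential",
          "username": "administrator@vsphere.local",
          "password": "<vc-password>",
          "thumbprint": "<vcenter-sha256-thumbprint>"
        }
      }' \
  https://nsx-mgr.example.com/api/v1/fabric/compute-managers
```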
Unregisters a specified compute manager. |
DELETE /api/v1/fabric/compute-managers/{compute-manager-id}
|
Returns information about a specific compute manager. |
GET /api/v1/fabric/compute-managers/{compute-manager-id}
|
Updates a specified compute manager. |
PUT /api/v1/fabric/compute-managers/{compute-manager-id}
|
Get the realized state of a compute manager |
GET /api/v1/fabric/compute-managers/{compute-manager-id}/state
|
Returns connection and version information about a compute manager. |
GET /api/v1/fabric/compute-managers/{compute-manager-id}/status
|
Returns the thumbprint hashing algorithm configuration used for the compute manager extension, i.e. the algorithm used to hash the NSX Manager thumbprint stamped in the compute manager extension. |
GET /api/v1/fabric/compute-managers/thumbprint-hashing-algorithm
|
Updates the thumbprint hashing algorithm used to stamp the NSX Manager thumbprint in the compute manager extension. Changing this setting to SHA256 will result in communication issues between the WCP component in VC and the NSX Manager, so it is recommended not to use SHA256 if the VC WCP feature is being used with NSX. |
PUT /api/v1/fabric/compute-managers/thumbprint-hashing-algorithm
|
Returns information about all discovered nodes. |
GET /api/v1/fabric/discovered-nodes
|
Returns information about a specific discovered node. |
GET /api/v1/fabric/discovered-nodes/{node-ext-id}
|
Apply cluster-level config on a discovered node. When a transport node profile (TNP) is applied to a cluster and any validation fails (e.g. VMs running on a host), the transport node (TN) is not created. In that case, after the required action is taken (e.g. VMs powered off), you can call this API to try to create the TN for that discovered node. Do not call this API if a transport node already exists for the discovered node; in that case, use the transport node API /transport-nodes/<transport-node-id>?action=restore_cluster_config. |
POST /api/v1/fabric/discovered-nodes/{node-ext-id}?action=reapply_cluster_config
|
Create a transport node for a discovered node. NSX components are installed on the host and a transport node is created with the given configuration. |
POST /api/v1/fabric/discovered-nodes/{node-ext-id}?action=create_transport_node
|
Returns the names of all supported host OS types. |
GET /api/v1/fabric/ostypes
|
Returns information about all physical/bare-metal servers registered as transport nodes. |
GET /api/v1/fabric/physical-servers
|
Returns information about a physical/bare-metal server based on the given transport node ID. |
GET /api/v1/fabric/physical-servers/{physical-server-id}
|
Retrieve scope associations for discovered resources. |
GET /api/v1/fabric/scope-associations
|
Add scope associations for discovered resources. |
POST /api/v1/fabric/scope-associations?action=add
|
Delete scope associations for discovered resources. |
POST /api/v1/fabric/scope-associations?action=delete
|
Returns information about configured failure domains. |
GET /api/v1/failure-domains
|
Creates a new failure domain. |
POST /api/v1/failure-domains
|
Deletes an existing failure domain. You cannot delete the system-generated default failure domain. |
DELETE /api/v1/failure-domains/{failure-domain-id}
|
Returns information about a single failure domain. |
GET /api/v1/failure-domains/{failure-domain-id}
|
Updates an existing failure domain. Modifiable parameters are display_name and the preferred_active_edge_services flag. |
PUT /api/v1/failure-domains/{failure-domain-id}
|
Returns global configurations of an NSX domain grouped by config type. These global configurations are valid across the NSX domain for their respective types unless they are overridden by more granular configurations. This REST routine is deprecated and will be removed after a year. |
GET /api/v1/global-configs
|
Returns global configurations that belong to a config type. This REST routine is deprecated and will be removed after a year. |
GET /api/v1/global-configs/{config-type}
|
Resyncs global configurations of a config type. This is similar to updating global configurations, but this request triggers an update even if the configs are unmodified. However, the realization of the new configurations is config-type specific; refer to the config-type-specific documentation for details about the configuration push state. This REST routine is deprecated and will be removed after a year. |
PUT /api/v1/global-configs/{config-type}?action=resync_config
|
Updates global configurations that belong to a config type. The request must include the updated values along with the unmodified values; the values that differ trigger an update to config-type-specific state. However, the realization of the new configurations is config-type specific; refer to the config-type-specific documentation for details about the configuration push state. This REST routine is deprecated and will be removed after a year. |
PUT /api/v1/global-configs/{config-type}
|
Returns information about the configured hostswitch profiles. Hostswitch profiles define networking policies for hostswitches (sometimes referred to as bridges in OVS). Currently, only uplink teaming is supported. Uplink teaming allows NSX to load-balance traffic across different physical NICs (PNICs) on the hypervisor hosts. Multiple teaming policies are supported, including LACP active, LACP passive, load balancing based on source ID, and failover order. |
GET /api/v1/host-switch-profiles
(Deprecated)
|
Creates a hostswitch profile. The resource_type is required. For uplink profiles, the teaming and policy parameters are required. By default, the mtu is 1600 and the transport_vlan is 0. The supported MTU range is 1280 through uplink_mtu_threshold, which is 9000 by default; the range can be extended by modifying uplink_mtu_threshold in SwitchingGlobalConfig to the required upper threshold. An illustrative payload follows. |
POST /api/v1/host-switch-profiles
(Deprecated)
|
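An illustrative create payload matching the description above (resource_type required; teaming and policy required for uplink profiles; mtu and transport_vlan shown at their documented defaults). Field names should be verified against the UplinkHostSwitchProfile schema.

```bash
# Create a failover-order uplink profile (deprecated API).
curl -k -s -u 'admin:<password>' -X POST \
  -H 'Content-Type: application/json' \
  -d '{
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "uplink-profile-example",
        "teaming": {
          "policy": "FAILOVER_ORDER",
          "active_list": [
            { "uplink_name": "uplink-1", "uplink_type": "PNIC" }
          ]
        },
        "transport_vlan": 0,
        "mtu": 1600
      }' \
  https://nsx-mgr.example.com/api/v1/host-switch-profiles
```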
Deletes a specified hostswitch profile. |
DELETE /api/v1/host-switch-profiles/{host-switch-profile-id}
(Deprecated)
|
Returns information about a specified hostswitch profile. |
GET /api/v1/host-switch-profiles/{host-switch-profile-id}
(Deprecated)
|
Modifies a specified hostswitch profile. The body of the PUT request must include the resource_type. For uplink profiles, the PUT request must also include teaming parameters. Modifiable attributes include display_name, mtu, and transport_vlan. For uplink teaming policies, uplink_name and policy are also modifiable. |
PUT /api/v1/host-switch-profiles/{host-switch-profile-id}
(Deprecated)
|
ALB Auth Token API calls to the Avi Controller. Passthrough API calls to the Avi controller using the auth of the policy API: requests sent to this API are passed through to the Avi controller, and the Avi controller's response is embedded in the response of this API. |
PUT /policy/api/v1/infra/alb-auth-token
|
Sets the post-deployment cluster configuration for the Advanced Load Balancer controller cluster. This is the post-controller-deployment workflow: it creates the role if it does not exist, creates the service user, sets the system configuration, creates the enforcement point, and saves the Infra-Admin credentials to the DB. |
PUT /policy/api/v1/infra/alb-onboarding-workflow
|
Deletes the EnforcementPoint along with the Infra Admin credentials contained by this workflow. |
DELETE /policy/api/v1/infra/alb-onboarding-workflow/{managed-by}
|
Creates an Event Log server for a Firewall Identity store. More than one Event Log server can be created, but only one event log server is used to synchronize directory objects. If more than one Event Log server is configured, NSX tries all the servers until it is able to successfully connect to one. |
PATCH /policy/api/v1/infra/firewall-identity-stores/{firewall-identity-store-id}/event-log-servers/{event-log-server-id}
|
Returns information about the configured hostswitch profiles. Hostswitch profiles define networking policies for hostswitches (sometimes referred to as bridges in OVS). Currently, the following profiles are supported: UplinkHostSwitchProfile, LldpHostSwitchProfile, NiocProfile, and ExtraConfigHostSwitchProfile. Uplink profile: teaming defined in this profile allows NSX to load-balance traffic across different physical NICs (PNICs) on the hypervisor hosts; multiple teaming policies are supported, including LACP active, LACP passive, load balancing based on source ID, and failover order. Lldp profile: enable or disable sending LLDP packets. NiocProfile: Network I/O Control settings, defining limits, shares, and reservations for various host traffic types. ExtraConfig: vendor-specific configuration on a HostSwitch, logical switch, or logical port. |
GET /policy/api/v1/infra/host-switch-profiles
|
Deletes a specified hostswitch profile. |
DELETE /policy/api/v1/infra/host-switch-profiles/{host-switch-profile-id}
|
Returns information about a specified hostswitch profile. |
GET /policy/api/v1/infra/host-switch-profiles/{host-switch-profile-id}
|
Patches a hostswitch profile. The resource_type is required and needs to be one of the following: UplinkHostSwitchProfile, LldpHostSwitchProfile, NiocProfile, or ExtraConfigHostSwitchProfile. Uplink profile: the teaming and policy parameters are required. By default, the mtu is 1600 and the transport_vlan is 0. The supported MTU range is 1280 through uplink_mtu_threshold, which is 9000 by default; the range can be extended by modifying uplink_mtu_threshold in SwitchingGlobalConfig to the required upper threshold. Teaming defined in this profile allows NSX to load-balance traffic across different physical NICs (PNICs) on the hypervisor hosts; multiple teaming policies are supported, including LACP active, LACP passive, load balancing based on source ID, and failover order. Lldp profile: enable or disable sending LLDP packets. NiocProfile: Network I/O Control settings, defining limits, shares, and reservations for various host traffic types. ExtraConfig: vendor-specific configuration on a HostSwitch, logical switch, or logical port. |
PATCH /policy/api/v1/infra/host-switch-profiles/{host-switch-profile-id}
|
Creates or updates a hostswitch profile. The required resource_type values, uplink profile requirements, and defaults are the same as described for the PATCH variant of this endpoint above. |
PUT /policy/api/v1/infra/host-switch-profiles/{host-switch-profile-id}
|
Returns information about all host transport node profiles. |
GET /policy/api/v1/infra/host-transport-node-profiles
|
Returns information about a specified host transport node profile. |
GET /policy/api/v1/infra/host-transport-node-profiles/{host-transport-node-profile-id}
|
Deletes the specified host transport node profile. A host transport node profile can be deleted only when it is not attached to any compute collection. |
DELETE /policy/api/v1/infra/host-transport-node-profiles/{transport-node-profile-id}
|
Updates a host transport node profile. A host transport node profile captures the configuration needed to create a host transport node, and can be attached to compute collections for automatic host transport node creation of member hosts. When the configuration of a host transport node profile (TNP) is updated, all the host transport nodes in all the compute collections to which this TNP is attached are updated to reflect the updated configuration. |
PUT /policy/api/v1/infra/host-transport-node-profiles/{transport-node-profile-id}
|
Creates an Event Log server for a Firewall Identity store. More than one Event Log server can be created, but only one event log server is used to synchronize directory objects. If more than one Event Log server is configured, NSX tries all the servers until it is able to successfully connect to one. |
PATCH /policy/api/v1/infra/identity-firewall-stores/{identity-firewall-store-id}/event-log-servers/{event-log-server-id}
|
Lists all Cluster Control Planes. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/cluster-control-planes
|
Deletes a Cluster Control Plane node. |
DELETE /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/cluster-control-planes/{cluster-control-plane-id}
|
Returns information about a specified Cluster Control Plane. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/cluster-control-planes/{cluster-control-plane-id}
|
Creates or updates a Cluster Control Plane, joining it to NSX-T. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/cluster-control-planes/{cluster-control-plane-id}
|
Returns information about all host transport nodes along with underlying host details. A transport node is a host that contains hostswitches. A hostswitch can have virtual machines connected to it. Because each transport node has hostswitches, transport nodes can also have virtual tunnel endpoints, which means that they can be part of the overlay. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes
|
Deletes the specified transport node and removes the specified host node from the system. The query param force can be used to force-delete host nodes; force delete is not supported if the transport node is part of a cluster on which a transport node profile is applied. If the unprepare_host option is set to false, the host is deleted without uninstalling the NSX components from the host. If transport node delete is called with the query param force unset or set to false and the uninstall of NSX components on the host fails, the TransportNodeState object is retained; if force is set to true and the uninstall fails, the TransportNodeState object is deleted. |
DELETE /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
Returns information about a specified transport node. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
Patches a host transport node. Transport nodes are hypervisor hosts that will participate in an NSX-T overlay: for a hypervisor host, this means that it hosts VMs that will communicate over NSX-T logical switches. This API creates a transport node for a host node (hypervisor) in the transport network. When you run this command for a host, NSX Manager attempts to install the NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the installation to succeed, you must provide the host login credentials and the host thumbprint. To get the ESXi host thumbprint, SSH to the host and run the command openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout. To generate the host key thumbprint using the SHA-256 algorithm, follow these steps. Log into the host, making sure that the connection is not vulnerable to a man-in-the-middle attack. Check whether a public key already exists; the host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'. If the key is not present, generate a new key by running ssh-keygen -t rsa and following the instructions. Then generate a SHA-256 hash of the key using the following command, making sure to pass the appropriate file name if the public key is stored under a name other than the default 'id_rsa.pub': awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64 (these commands are collected in a block after this entry). Additional documentation on creating a transport node can be found in the NSX-T Installation Guide. In order for the transport node to forward packets, the host_switch_spec property must be specified. Host switches (called bridges in OVS on KVM hypervisors) are the individual switches within the host virtual switch; virtual machines are connected to the host switches. When creating a transport node, you need to specify whether the host switches are already manually preconfigured on the node, or whether NSX should create and manage the host switches. You specify this choice by the type of host switches you pass in the host_switch_spec property of the TransportNode request payload. For a KVM host, you can preconfigure the host switch, or you can have NSX Manager perform the configuration; for an ESXi host, NSX Manager always configures the host switch. To preconfigure the host switches on a KVM host, pass an array of PreconfiguredHostSwitchSpec objects that describe those host switches; in the current NSX-T release, only one preconfigured host switch can be specified. See the PreconfiguredHostSwitchSpec schema definition for documentation on the properties that must be provided. Preconfigured host switches are only supported on KVM hosts, not on ESXi hosts. To allow NSX to manage the host switch configuration on KVM and ESXi hosts, pass an array of StandardHostSwitchSpec objects in the host_switch_spec property, and NSX will automatically create host switches with the properties you provide; in the current NSX-T release, up to 16 host switches can be automatically managed. See the StandardHostSwitchSpec schema definition for documentation on the properties that must be provided. The request should provide node_deployment_info. |
PATCH /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
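The thumbprint commands embedded in the description above, collected here verbatim for readability:

```bash
# ESXi host certificate thumbprint (run on the host over SSH):
openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout

# SSH host key SHA-256 thumbprint. Generate a key first if none exists
# (the host public key is generally at /etc/ssh/ssh_host_rsa_key.pub):
ssh-keygen -t rsa

# Hash the public key; substitute your file name if it is not id_rsa.pub:
awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64
```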
Updates transport node maintenance mode: puts the transport node into maintenance mode or exits from maintenance mode. When a HostTransportNode is in maintenance mode, no configuration changes are allowed. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
Resyncs the TransportNode configuration on a host. This is similar to updating the TransportNode with its existing configuration, but it force-syncs these configurations to the host (no backend optimizations). |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}?action=resync_host_config
|
Applies the cluster-level transport node profile on an overridden host. A host can be overridden to have a different configuration than the transport node profile (TNP) on the cluster; this action restores such an overridden host back to the cluster-level TNP. This API can also be used in another case: when a TNP is applied to a cluster and any validation fails (e.g. VMs running on a host), the existing transport node (TN) is not updated. In that case, after the issue is resolved manually (e.g. VMs powered off), you can call this API to update the TN as per the cluster-level TNP. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}?action=restore_cluster_config
|
Creates or updates a host transport node. The behavior, prerequisites (host login credentials and host thumbprint), and host_switch_spec requirements are the same as described for the PATCH variant of this endpoint above; see that entry and the command block following it. The request should provide node_deployment_info. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}
|
For the given TransportNode, fetches all the VIF info from VC and returns the corresponding state. Only host switches configured for security are considered. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/discovered-vifs
|
Get the module details of a host transport node |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/modules
|
Returns information about the current state of the transport node configuration and information about the associated hostswitch. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/state
|
Submits a new VTEP action for a particular TransportNode. The status of submitted actions can be retrieved using the ListTransportNodeVtepActionsStatus API. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/vteps/actions
|
Lists the status of all VTEP actions for a particular TransportNode. If an action's status is missing from the response, the action has completed successfully. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/{host-transport-node-id}/vteps/actions/status
|
Returns a list of transport node states whose realized state matches the one provided as a query parameter. If this API is called multiple times in parallel, it fails with an error indicating that another request is already in progress; in that case, try the API on another NSX Manager instance (if one exists) or try again after some time. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/host-transport-nodes/state
|
Returns a paginated list of all sub-clusters. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters
|
Moves a host from one sub-cluster to another. When a node is moved from one sub-cluster to another, the appropriate sub-configuration is applied to the node based on the TransportNodeCollection configuration. If the TransportNodeCollection does not have sub-configurations for the sub-cluster, the global configuration is applied. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters?action=move
|
Deletes a sub-cluster. Deletion is not allowed if the sub-cluster contains discovered nodes. |
DELETE /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
Reads a sub-cluster configuration. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
Patches a sub-cluster under a compute collection. |
PATCH /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
Creates or updates a sub-cluster under a compute collection. The maximum number of sub-clusters that can be created under a compute collection is 16. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/sub-clusters/{subcluster-id}
|
Returns all transport node collections. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections
|
Detaches the transport node profile from a compute collection. Deleting a transport node collection detaches the transport node profile (TNP) from the compute collection. This has no effect on existing transport nodes; however, new hosts added to the compute collection will no longer be automatically converted to NSX transport nodes. Detaching a TNP from a compute collection does not delete the TNP. |
DELETE /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}
|
Returns a transport node collection by ID. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}
|
Patches a transport node collection: attaches a different transport node profile to the compute collection by updating the transport node collection. |
PATCH /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}
|
Retries applying a transport node profile. This API is relevant for compute collections on which vLCM is enabled, and should be invoked to retry the realization of the transport node profile on the compute collection. This is useful when profile realization failed because of an error in vLCM. This API has no effect if vLCM is not enabled on the compute collection. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}?action=retry_profile_realization
|
Uninstalls NSX from the transport node collection with the ID specified in the request. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}?action=remove_nsx
|
Configures a compute collection for security. In the request body, specify a transport node collection with only the ID of the target compute collection meant for security; a transport node profile ID should specifically not be specified. This API defines a system-generated security transport node profile and applies it on the compute collection to create the transport node collection. |
POST /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}?action=install_for_microseg
|
Returns the state of the transport node collection based on the states of the transport nodes of the hosts that are part of the compute collection. |
GET /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collection-id}/state
|
Creates a transport node collection by attaching a transport node profile to a cluster. When the transport node collection is created, the hosts that are part of the compute collection are prepared automatically, i.e. NSX Manager attempts to install the NSX components on the hosts. Transport nodes for these hosts are created using the configuration specified in the transport node profile. Set apply_profile to false if you do not want to apply the transport node profile to existing transport nodes that have the overridden-host flag set while the ignore-overridden-hosts flag is set to true on the transport node profile. |
PUT /policy/api/v1/infra/sites/{site-id}/enforcement-points/{enforcementpoint-id}/transport-node-collections/{transport-node-collections-id}
|
Lists LLDP neighbor properties for all interfaces of a fabric node. |
GET /api/v1/lldp/fabric-nodes/{fabric-node-id}/interfaces
|
Reads LLDP neighbor properties for a specific interface of a fabric node. |
GET /api/v1/lldp/fabric-nodes/{fabric-node-id}/interfaces/{interface-name}
|
Lists LLDP neighbor properties for all interfaces of a transport node. |
GET /api/v1/lldp/transport-nodes/{node-id}/interfaces
|
Reads LLDP neighbor properties for a specific interface of a transport node. |
GET /api/v1/lldp/transport-nodes/{node-id}/interfaces/{interface-name}
|
Returns information about the NSX appliance, including the release number, time zone, system time, kernel version, message of the day (motd), and host name. |
GET /api/v1/transport-nodes/{transport-node-id}/node
GET /api/v1/cluster/{cluster-node-id}/node
GET /api/v1/node |
Restarts or shuts down the NSX appliance. |
POST /api/v1/transport-nodes/{transport-node-id}/node?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node?action=shutdown POST /api/v1/cluster/{cluster-node-id}/node?action=restart POST /api/v1/cluster/{cluster-node-id}/node?action=shutdown POST /api/v1/node?action=restart POST /api/v1/node?action=shutdown |
Set the node system timeSet the node system time to the given time in UTC in the RFC3339 format 'yyyy-mm-ddThh:mm:ssZ'. |
POST /api/v1/transport-nodes/{transport-node-id}/node?action=set_system_time
POST /api/v1/cluster/{cluster-node-id}/node?action=set_system_time POST /api/v1/node?action=set_system_time |
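A minimal sketch of setting the system time follows; the host and credentials are placeholders, and the body field name system_time is an assumption rather than a value confirmed by this reference.

```python
# Minimal sketch: set the appliance system time. Host and credentials are
# placeholders; the body field name "system_time" is an assumption.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

resp = requests.post(
    f"{NSX}/api/v1/node",
    params={"action": "set_system_time"},
    json={"system_time": "2024-06-01T12:00:00Z"},  # RFC3339 UTC, as required
    auth=AUTH,
    verify=False,  # lab setup with self-signed certificates
)
resp.raise_for_status()
```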
Update node propertiesModifies NSX appliance properties. Modifiable properties include the timezone, message of the day (motd), and hostname. The NSX appliance node_version, system_time, and kernel_version are read only and cannot be modified with this method. |
PUT /api/v1/transport-nodes/{transport-node-id}/node
PUT /api/v1/cluster/{cluster-node-id}/node PUT /api/v1/node |
Read node authentication policy and password complexity configurationReturns information about the currently configured authentication policies and password complexity on the node. |
GET /api/v1/transport-nodes/{transport-node-id}/node/aaa/auth-policy
GET /api/v1/cluster/{cluster-node-id}/node/aaa/auth-policy GET /api/v1/node/aaa/auth-policy |
Resets node authentication policy and password complexity configurationResets the currently configured authentication policy and password complexity on the node to the defaults. Administrators need to enforce a password change for existing user accounts in order to match the newly configured complexity requirements. reset-all: resets both the configured authentication policy and the password complexity. reset-auth-policies: resets only the configured authentication policy. reset-pwd-complexity: resets only the configured password complexity. |
POST /api/v1/transport-nodes/{transport-node-id}/node/aaa/auth-policy?action=reset-all
POST /api/v1/transport-nodes/{transport-node-id}/node/aaa/auth-policy?action=reset-auth-policies POST /api/v1/transport-nodes/{transport-node-id}/node/aaa/auth-policy?action=reset-pwd-complexity POST /api/v1/cluster/{cluster-node-id}/node/aaa/auth-policy?action=reset-all POST /api/v1/cluster/{cluster-node-id}/node/aaa/auth-policy?action=reset-auth-policies POST /api/v1/cluster/{cluster-node-id}/node/aaa/auth-policy?action=reset-pwd-complexity POST /api/v1/node/aaa/auth-policy?action=reset-all POST /api/v1/node/aaa/auth-policy?action=reset-auth-policies POST /api/v1/node/aaa/auth-policy?action=reset-pwd-complexity |
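As a concrete example, a minimal sketch of one of these reset actions follows; the host and credentials are placeholders.

```python
# Minimal sketch: reset only the password complexity configuration on the
# local node. Host and credentials are placeholders.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

resp = requests.post(
    f"{NSX}/api/v1/node/aaa/auth-policy",
    params={"action": "reset-pwd-complexity"},  # or reset-all / reset-auth-policies
    auth=AUTH,
    verify=False,  # lab setup with self-signed certificates
)
resp.raise_for_status()
```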
Update node authentication policy and password complexity configurationUpdate the currently configured authentication policy and password complexity on the node. If any of api_max_auth_failures, api_failed_auth_reset_period, or api_failed_auth_lockout_period are modified, the http service is automatically restarted. Changes to password complexity, however, do not apply to already configured user passwords; administrators need to enforce a password change for existing user accounts in order to match the newly configured complexity requirements. All values from AuthenticationPolicyProperties are kept in sync among the management cluster nodes. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/aaa/auth-policy
PUT /api/v1/cluster/{cluster-node-id}/node/aaa/auth-policy PUT /api/v1/node/aaa/auth-policy |
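A minimal read-modify-write sketch of this update follows; the host and credentials are placeholders, and only the field names quoted in the description above are used.

```python
# Minimal read-modify-write sketch for the auth policy. Host and credentials
# are placeholders. Note the http service restarts automatically when the
# api_* fields below are changed.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")
URL = f"{NSX}/api/v1/node/aaa/auth-policy"

policy = requests.get(URL, auth=AUTH, verify=False).json()
policy["api_max_auth_failures"] = 5
policy["api_failed_auth_lockout_period"] = 900  # seconds (assumed unit)
resp = requests.put(URL, json=policy, auth=AUTH, verify=False)
resp.raise_for_status()
```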
Read Central Config properties |
GET /api/v1/cluster/{cluster-node-id}/node/central-config
GET /api/v1/node/central-config |
Update Central Config properties |
PUT /api/v1/cluster/{cluster-node-id}/node/central-config
PUT /api/v1/node/central-config |
Read node certificate properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/certificate
|
Current configuration for this node |
GET /api/v1/transport-nodes/{transport-node-id}/node/configuration
|
Read edge config diagnosis |
GET /api/v1/transport-nodes/{transport-node-id}/node/diagnosis
|
Read edge diagnosis inconsistency |
GET /api/v1/transport-nodes/{transport-node-id}/node/diagnosis/inconsistency
|
List node files |
GET /api/v1/transport-nodes/{transport-node-id}/node/file-store
GET /api/v1/cluster/{cluster-node-id}/node/file-store GET /api/v1/node/file-store |
Retrieve ssh fingerprint for given remote serverRetrieve ssh fingerprint for a given remote server and port. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store?action=retrieve_ssh_fingerprint
POST /api/v1/cluster/{cluster-node-id}/node/file-store?action=retrieve_ssh_fingerprint POST /api/v1/node/file-store?action=retrieve_ssh_fingerprint |
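A minimal sketch of the fingerprint retrieval follows; the host, credentials, and the body field names ("server", "port") are assumptions for illustration.

```python
# Minimal sketch: fetch a remote server's SSH fingerprint so later
# file-store copies can pin it. Host, credentials, and the body field
# names ("server", "port") are assumptions.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

resp = requests.post(
    f"{NSX}/api/v1/node/file-store",
    params={"action": "retrieve_ssh_fingerprint"},
    json={"server": "backups.example.com", "port": 22},
    auth=AUTH,
    verify=False,  # lab setup with self-signed certificates
)
resp.raise_for_status()
print(resp.json())  # the response carries the fingerprint to reuse later
```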
Create directory in remote file serverCreate a directory on the remote server. Supports only SFTP. You must provide the remote server's SSH fingerprint. See the NSX Administration Guide for information and instructions about finding the SSH fingerprint. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store?action=create_remote_directory
POST /api/v1/cluster/{cluster-node-id}/node/file-store?action=create_remote_directory POST /api/v1/node/file-store?action=create_remote_directory |
Delete file |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}
DELETE /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name} DELETE /api/v1/node/file-store/{file-name} |
Read file properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}
GET /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name} GET /api/v1/node/file-store/{file-name} |
Copy a remote file to the file storeCopy a remote file to the file store. If you use scp or sftp, you must provide the remote server's SSH fingerprint. See the NSX-T Administration Guide for information and instructions about finding the SSH fingerprint. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}?action=copy_from_remote_file
POST /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}?action=copy_from_remote_file POST /api/v1/node/file-store/{file-name}?action=copy_from_remote_file |
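A minimal sketch of pulling a file from an SFTP server into the node file store follows, pinning the SSH fingerprint retrieved above. Every field name in the body is an illustrative assumption; consult the API schema for the exact request shape.

```python
# Minimal sketch: copy a remote file into the node file store over SFTP.
# All body field names are illustrative assumptions; check the API schema.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

body = {
    "server": "backups.example.com",         # assumed field names throughout
    "port": 22,
    "uri": "/exports/support-bundle.tgz",
    "protocol": {
        "name": "sftp",
        "ssh_fingerprint": "SHA256:placeholder",  # from retrieve_ssh_fingerprint
        "authentication_scheme": {
            "scheme_name": "password",
            "username": "backup",
            "password": "secret",
        },
    },
}
resp = requests.post(
    f"{NSX}/api/v1/node/file-store/support-bundle.tgz",
    params={"action": "copy_from_remote_file"},
    json=body,
    auth=AUTH,
    verify=False,  # lab setup with self-signed certificates
)
resp.raise_for_status()
```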
Upload a file to the file storeWhen you issue this API, the client must specify the HTTP header Content-Type:application/octet-stream and a request body containing the contents of the file to place in the filestore. In the CLI, you can view the filestore with the get files command. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}
POST /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name} POST /api/v1/node/file-store/{file-name} |
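A minimal sketch of the upload follows, using the octet-stream body described above; the host, credentials, and file name are placeholders.

```python
# Minimal sketch: upload a local file into the node file store with the
# required application/octet-stream body. Host, credentials, and file name
# are placeholders.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

with open("bundle.tgz", "rb") as f:
    resp = requests.post(
        f"{NSX}/api/v1/node/file-store/bundle.tgz",
        headers={"Content-Type": "application/octet-stream"},
        data=f,  # raw file bytes as the request body
        auth=AUTH,
        verify=False,  # lab setup with self-signed certificates
    )
resp.raise_for_status()
```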
Copy file in the file store to a remote file storeCopy a file in the file store to a remote server. If you use scp or sftp, you must provide the remote server's SSH fingerprint. See the NSX-T Administration Guide for information and instructions about finding the SSH fingerprint. |
POST /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}?action=copy_to_remote_file
POST /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}?action=copy_to_remote_file POST /api/v1/node/file-store/{file-name}?action=copy_to_remote_file |
Read file thumbprint |
GET /api/v1/transport-nodes/{transport-node-id}/node/file-store/{file-name}/thumbprint
GET /api/v1/cluster/{cluster-node-id}/node/file-store/{file-name}/thumbprint GET /api/v1/node/file-store/{file-name}/thumbprint |
Get NSX Edge stateful flows |
GET /api/v1/transport-nodes/{transport-node-id}/node/flows
|
Get NSX Edge stateful flows by interface |
GET /api/v1/transport-nodes/{transport-node-id}/node/flows/interfaces/{iface-uuid}
|
Get NSX Edge stateful flows by router |
GET /api/v1/transport-nodes/{transport-node-id}/node/flows/logical-routers/{uuid}
|
Return node GRUB propertiesReturn node GRUB properties. |
GET /api/v1/transport-nodes/{transport-node-id}/node/grub
GET /api/v1/cluster/{cluster-node-id}/node/grub GET /api/v1/node/grub |
Update node GRUB propertiesUpdate node GRUB properties. Note: To update GRUB user properties such as the password, use /node/grub/<grub-username> instead. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/grub
PUT /api/v1/cluster/{cluster-node-id}/node/grub PUT /api/v1/node/grub |
Update node GRUB user propertiesUpdates the GRUB user properties. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/grub/{grub-username}
PUT /api/v1/cluster/{cluster-node-id}/node/grub/{grub-username} PUT /api/v1/node/grub/{grub-username} |
Gets the enable status for Mandatory Access Control |
GET /api/v1/transport-nodes/{transport-node-id}/node/hardening-policy/mandatory-access-control
GET /api/v1/cluster/{cluster-node-id}/node/hardening-policy/mandatory-access-control GET /api/v1/node/hardening-policy/mandatory-access-control |
Enable or disable Mandatory Access Control |
PUT /api/v1/transport-nodes/{transport-node-id}/node/hardening-policy/mandatory-access-control
PUT /api/v1/cluster/{cluster-node-id}/node/hardening-policy/mandatory-access-control PUT /api/v1/node/hardening-policy/mandatory-access-control |
Get the report for Mandatory Access Control |
GET /api/v1/transport-nodes/{transport-node-id}/node/hardening-policy/mandatory-access-control/report
GET /api/v1/cluster/{cluster-node-id}/node/hardening-policy/mandatory-access-control/report GET /api/v1/node/hardening-policy/mandatory-access-control/report |
List available Napp appliance form factorsReturns information about all form factors available for the Napp cluster. |
GET /api/v1/cluster/{cluster-node-id}/node/intelligence/form-factors
(Deprecated)
GET /api/v1/node/intelligence/form-factors (Deprecated) |
Logical-router diagnosisReturns information about the specified logical router configured on the edge. |
GET /api/v1/transport-nodes/{transport-node-id}/node/logical-routers/{logical-router-id}/diagnosis
|
Logical-routers diagnosisReturns information about all logical routers, or about logical routers of a specified type, configured on the edge. |
GET /api/v1/transport-nodes/{transport-node-id}/node/logical-routers/diagnosis
|
List available node logsReturns the number of log files and lists the log files that reside on the NSX virtual appliance. The list includes the filename, file size, and last-modified time in milliseconds since epoch (1 January 1970) for each log file. Knowing the last-modified time with millisecond accuracy since epoch is helpful when you are comparing two times, such as the time of a POST request and the end time on a server. |
GET /api/v1/transport-nodes/{transport-node-id}/node/logs
GET /api/v1/cluster/{cluster-node-id}/node/logs GET /api/v1/node/logs |
Read node log propertiesFor a single specified log file, lists the filename, file size, and last-modified time. |
GET /api/v1/transport-nodes/{transport-node-id}/node/logs/{log-name}
GET /api/v1/cluster/{cluster-node-id}/node/logs/{log-name} GET /api/v1/node/logs/{log-name} |
Read node log contentsFor a single specified log file, returns the content of the log file. This method supports byte-range requests. To request just a portion of a log file, supply an HTTP Range header, e.g. "Range: bytes=<start>-<end>". <end> is optional and, if omitted, the file contents from start to the end of the file are returned. |
GET /api/v1/transport-nodes/{transport-node-id}/node/logs/{log-name}/data
GET /api/v1/cluster/{cluster-node-id}/node/logs/{log-name}/data GET /api/v1/node/logs/{log-name}/data |
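A minimal sketch of a ranged log read follows; the host, credentials, and log name are placeholders.

```python
# Minimal sketch: fetch only the first 4 KiB of a log file using the HTTP
# Range header described above. Host, credentials, and log name are
# placeholders.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

resp = requests.get(
    f"{NSX}/api/v1/node/logs/syslog/data",
    headers={"Range": "bytes=0-4095"},  # omit the end to read to EOF
    auth=AUTH,
    verify=False,  # lab setup with self-signed certificates
)
resp.raise_for_status()
print(resp.text)
```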
Get Edge maintenance mode |
GET /api/v1/transport-nodes/{transport-node-id}/node/maintenance-mode
|
Set Edge maintenance mode |
PUT /api/v1/transport-nodes/{transport-node-id}/node/maintenance-mode
|
Delete management plane configuration for this nodeDelete the management plane configuration for this node. |
DELETE /api/v1/cluster/{cluster-node-id}/node/management-plane
DELETE /api/v1/node/management-plane |
Get management plane configuration for this nodeRetrieve the management plane configuration for this node to identify the Manager node with which the controller service is communicating. |
GET /api/v1/cluster/{cluster-node-id}/node/management-plane
(Experimental)
GET /api/v1/node/management-plane (Experimental) |
Update management plane configuration for this nodeUpdate the management plane configuration for this node. |
PUT /api/v1/cluster/{cluster-node-id}/node/management-plane
(Experimental)
PUT /api/v1/node/management-plane (Experimental) |
NodeModeReturns the current node mode. |
GET /api/v1/cluster/{cluster-node-id}/node/mode
GET /api/v1/node/mode |
Delete MPA configuration for this nodeDelete the MPA configuration for this node. |
DELETE /api/v1/cluster/{cluster-node-id}/node/mpa-config
DELETE /api/v1/node/mpa-config |
Get MPA configuration for this nodeRetrieve the MPA configuration for this node to identify the Manager nodes with which this node is communicating. |
GET /api/v1/cluster/{cluster-node-id}/node/mpa-config
(Experimental)
GET /api/v1/node/mpa-config (Experimental) |
Update MPA configuration for this nodeUpdate the MPA configuration for this node. |
PUT /api/v1/cluster/{cluster-node-id}/node/mpa-config
(Experimental)
PUT /api/v1/node/mpa-config (Experimental) |
Update management plane agent configuration and restart MPA |
PUT /api/v1/transport-nodes/{transport-node-id}/node/mpa-config?action=restart
(Experimental)
|
Read network configuration properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/network
GET /api/v1/cluster/{cluster-node-id}/node/network GET /api/v1/node/network |
List the Node's Network InterfacesReturns the number of interfaces on the node appliance and detailed information about each interface. Interface information includes MTU, broadcast and host IP addresses, link and admin status, MAC address, network mask, and the IP configuration method (static or DHCP). |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/interfaces
GET /api/v1/cluster/{cluster-node-id}/node/network/interfaces GET /api/v1/node/network/interfaces |
Read the Node's Network InterfaceReturns detailed information about the specified interface. Interface information includes MTU, broadcast and host IP addresses, link and admin status, MAC address, network mask, and the IP configuration method. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/interfaces/{interface-id}
GET /api/v1/cluster/{cluster-node-id}/node/network/interfaces/{interface-id} GET /api/v1/node/network/interfaces/{interface-id} |
Update the Node's Network InterfaceUpdates the specified interface properties. You cannot change the properties ip_configuration, ip_addresses, or plane. NSX Manager must have a static IP address. You must use the NSX CLI to configure a controller or an edge node. Note: an NSX Manager reboot is required after adding an IPv6 address. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/network/interfaces/{interface-id}
PUT /api/v1/cluster/{cluster-node-id}/node/network/interfaces/{interface-id} PUT /api/v1/node/network/interfaces/{interface-id} |
Read the Node's Network Interface StatisticsOn the specified interface, returns the number of received (rx), transmitted (tx), and dropped packets; the number of bytes and errors received and transmitted on the interface; and the number of detected collisions. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/interfaces/{interface-id}/stats
GET /api/v1/cluster/{cluster-node-id}/node/network/interfaces/{interface-id}/stats GET /api/v1/node/network/interfaces/{interface-id}/stats |
Read the Node's Name ServersReturns the list of servers that the node uses to look up IP addresses associated with given domain names. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/name-servers
GET /api/v1/cluster/{cluster-node-id}/node/network/name-servers GET /api/v1/node/network/name-servers |
Update the Node's Name ServersModifies the list of servers that the node uses to look up IP addresses associated with given domain names. If DHCP is configured, this method returns a 409 CONFLICT error, because DHCP manages the list of name servers. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/network/name-servers
PUT /api/v1/cluster/{cluster-node-id}/node/network/name-servers PUT /api/v1/node/network/name-servers |
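A minimal sketch of the name-server update follows; the host, credentials, and the body field name name_servers are assumptions for illustration.

```python
# Minimal sketch: replace the node's DNS servers. Host, credentials, and
# the body field name "name_servers" are assumptions. A 409 response means
# DHCP owns this list, as noted above.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

resp = requests.put(
    f"{NSX}/api/v1/node/network/name-servers",
    json={"name_servers": ["10.0.0.2", "10.0.0.3"]},
    auth=AUTH,
    verify=False,  # lab setup with self-signed certificates
)
if resp.status_code == 409:
    print("Name servers are managed by DHCP on this node.")
else:
    resp.raise_for_status()
```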
List node network routesReturns detailed information about each route in the node routing table. Routes can be IPv4, IPv6, or both. Route information includes the route ipv6 flag (True or False), the route type (default, static, and so on), a unique route identifier, the route metric, the protocol from which the route was learned, the route source (which is the preferred egress interface), the route destination, and the route scope. If the ipv6 flag is True, the route information is for an IPv6 route; otherwise it is for an IPv4 route. The route scope refers to the distance to the destination network: the "host" scope leads to a destination address on the node, such as a loopback address; the "link" scope leads to a destination on the local network; and the "global" scope leads to addresses that are more than one hop away. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/routes
GET /api/v1/cluster/{cluster-node-id}/node/network/routes GET /api/v1/node/network/routes |
Create node network routeAdd a route to the node routing table. For static routes, the route_type, interface_id, netmask, and destination are required parameters. For default routes, the route_type, gateway address, and interface_id are required. For blackhole routes, the route_type and destination are required. All other parameters are optional. When you add a static route, the scope and route_id are created automatically. When you add a default or blackhole route, the route_id is created automatically. The route_id is read-only, meaning that it cannot be modified. All other properties can be modified by deleting and re-adding the route. |
POST /api/v1/transport-nodes/{transport-node-id}/node/network/routes
POST /api/v1/cluster/{cluster-node-id}/node/network/routes POST /api/v1/node/network/routes |
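A minimal sketch of adding a static route follows, using the required parameters named in the description; the host, credentials, and concrete values are placeholders.

```python
# Minimal sketch: add a static route with the required parameters named in
# the description (route_type, interface_id, netmask, destination). Host,
# credentials, and concrete values are placeholders.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

route = {
    "route_type": "static",
    "interface_id": "eth0",
    "netmask": "255.255.255.0",
    "destination": "192.168.50.0",
    "gateway": "10.0.0.1",  # optional for static routes
}
resp = requests.post(f"{NSX}/api/v1/node/network/routes",
                     json=route, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json().get("route_id"))  # generated server-side, read-only
```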
Delete node network routeDelete a route from the node routing table. You can modify an existing route by deleting it and then posting the modified version of the route. To verify, remove the route ID from the URI, issue a GET request, and note the absence of the deleted route. |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/network/routes/{route-id}
DELETE /api/v1/cluster/{cluster-node-id}/node/network/routes/{route-id} DELETE /api/v1/node/network/routes/{route-id} |
Read node network routeReturns detailed information about a specified route in the node routing table. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/routes/{route-id}
GET /api/v1/cluster/{cluster-node-id}/node/network/routes/{route-id} GET /api/v1/node/network/routes/{route-id} |
Read the Node's Search DomainsReturns the domain list that the node uses to complete unqualified host names. When a host name does not include a fully qualified domain name (FQDN), the NSX Management node appends the first-listed domain name to the host name before the host name is looked up. The NSX Management node continues this for each entry in the domain list until it finds a match. |
GET /api/v1/transport-nodes/{transport-node-id}/node/network/search-domains
GET /api/v1/cluster/{cluster-node-id}/node/network/search-domains GET /api/v1/node/network/search-domains |
Update the Node's Search DomainsModifies the list of domain names that the node uses to complete unqualified host names. If DHCP is configured, this method returns a 409 CONFLICT error, because DHCP manages the list of search domains. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/network/search-domains
PUT /api/v1/cluster/{cluster-node-id}/node/network/search-domains PUT /api/v1/node/network/search-domains |
List node processesReturns the number of processes and information about each process. Process information includes 1) mem_resident, which is roughly equivalent to the amount of RAM, in bytes, currently used by the process, 2) parent process ID (ppid), 3) process name, 4) process up time in milliseconds, 5) mem_used, which is the amount of virtual memory used by the process, in bytes, 6) process start time, in milliseconds since epoch, 7) process ID (pid), 8) CPU time, both user and system, consumed by the process in milliseconds. |
GET /api/v1/transport-nodes/{transport-node-id}/node/processes
GET /api/v1/cluster/{cluster-node-id}/node/processes GET /api/v1/node/processes |
Read node processReturns information for a specified process ID (pid). |
GET /api/v1/transport-nodes/{transport-node-id}/node/processes/{process-id}
GET /api/v1/cluster/{cluster-node-id}/node/processes/{process-id} GET /api/v1/node/processes/{process-id} |
List node servicesReturns a list of all services available on the node appliance. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services
GET /api/v1/cluster/{cluster-node-id}/node/services GET /api/v1/node/services |
Read the Async Replicator service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/async_replicator
GET /api/v1/node/services/async_replicator |
Restart, start or stop the Async Replicator service |
POST /api/v1/cluster/{cluster-node-id}/node/services/async_replicator?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/async_replicator?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/async_replicator?action=stop POST /api/v1/node/services/async_replicator?action=restart POST /api/v1/node/services/async_replicator?action=start POST /api/v1/node/services/async_replicator?action=stop |
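The service rows from here on share one action pattern: POST to the service URL with action=start, stop, or restart, then read the /status sub-resource. A minimal sketch of that recurring pattern follows; the host, credentials, and service name are placeholders.

```python
# Minimal sketch of the recurring service-action pattern used by the
# node service endpoints in this section. Host, credentials, and service
# name are placeholders.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")

def service_action(service: str, action: str) -> dict:
    """POST ?action=... to a node service, then read back its status."""
    base = f"{NSX}/api/v1/node/services/{service}"
    requests.post(base, params={"action": action},
                  auth=AUTH, verify=False).raise_for_status()
    return requests.get(f"{base}/status", auth=AUTH, verify=False).json()

print(service_action("async_replicator", "restart"))
```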
Update the async_replicator service properties |
PUT /api/v1/cluster/{cluster-node-id}/node/services/async_replicator
PUT /api/v1/node/services/async_replicator |
Read the Async Replicator service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/async_replicator/status
GET /api/v1/node/services/async_replicator/status |
Read auth service propertiesRead auth service properties. |
GET /api/v1/cluster/{cluster-node-id}/node/services/auth
GET /api/v1/node/services/auth |
Restart the auth service |
POST /api/v1/cluster/{cluster-node-id}/node/services/auth?action=restart
POST /api/v1/node/services/auth?action=restart |
Stop the auth service |
POST /api/v1/cluster/{cluster-node-id}/node/services/auth?action=stop
POST /api/v1/node/services/auth?action=stop |
Start the auth service |
POST /api/v1/cluster/{cluster-node-id}/node/services/auth?action=start
POST /api/v1/node/services/auth?action=start |
Update auth service propertiesUpdate auth service properties. |
PUT /api/v1/cluster/{cluster-node-id}/node/services/auth
PUT /api/v1/node/services/auth |
Read auth service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/auth/status
GET /api/v1/node/services/auth/status |
Read cluster boot manager service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/cluster_manager
GET /api/v1/node/services/cluster_manager |
Restart, start or stop the cluster boot manager service |
POST /api/v1/cluster/{cluster-node-id}/node/services/cluster_manager?action=start
POST /api/v1/cluster/{cluster-node-id}/node/services/cluster_manager?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/cluster_manager?action=restart POST /api/v1/node/services/cluster_manager?action=start POST /api/v1/node/services/cluster_manager?action=stop POST /api/v1/node/services/cluster_manager?action=restart |
Read cluster boot manager service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/cluster_manager/status
GET /api/v1/node/services/cluster_manager/status |
Read cm inventory service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/cm-inventory
GET /api/v1/node/services/cm-inventory |
Restart, start or stop the manager service |
POST /api/v1/cluster/{cluster-node-id}/node/services/cm-inventory?action=start
POST /api/v1/cluster/{cluster-node-id}/node/services/cm-inventory?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/cm-inventory?action=restart POST /api/v1/node/services/cm-inventory?action=start POST /api/v1/node/services/cm-inventory?action=stop POST /api/v1/node/services/cm-inventory?action=restart |
Read manager service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/cm-inventory/status
GET /api/v1/node/services/cm-inventory/status |
Read controller service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/controller
GET /api/v1/node/services/controller |
Restart, start or stop the controller service |
POST /api/v1/cluster/{cluster-node-id}/node/services/controller?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/controller?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/controller?action=stop POST /api/v1/node/services/controller?action=restart POST /api/v1/node/services/controller?action=start POST /api/v1/node/services/controller?action=stop |
Read controller server certificate properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/controller/controller-certificate
GET /api/v1/node/services/controller/controller-certificate |
Get the status (Enabled/Disabled) of controller profiler |
GET /api/v1/cluster/{cluster-node-id}/node/services/controller/profiler
GET /api/v1/node/services/controller/profiler |
Enable or disable controller profiler |
PUT /api/v1/cluster/{cluster-node-id}/node/services/controller/profiler
PUT /api/v1/node/services/controller/profiler |
Read controller service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/controller/status
GET /api/v1/node/services/controller/status |
Read NSX Edge Datapath service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane
|
Restart, start or stop the NSX Edge Datapath service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane?action=stop |
Update NSX Edge Datapath service properties |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane
|
Get NSX Edge dataplane cpu stats |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/cpu-stats
|
Update NSX Edge dataplane control packet prioritization settingEnable or disable NSX Edge dataplane control packet prioritization. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/ctrl-prio
|
Check dynamic core feature enabled status of NSX Edge dataplaneCheck current status of NSX Edge dataplane dynamic core feature. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/dynamic-core
|
Update NSX Edge dataplane dynamic core feature enabled statusEnable or disable NSX Edge dataplane dynamic core feature. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/dynamic-core
|
Get NSX Edge dataplane flow cache setting |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/flow-cache
|
Update NSX Edge dataplane flow cache settingEnable or disable NSX Edge dataplane flow cache. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/flow-cache
|
Return top 10 flows informationRuns the flow monitor for the given number of timeout seconds on all or selected CPU core(s) and returns the top 10 flows. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/flow-mon
|
Start NSX Edge dataplane flow monitorStarts the NSX Edge dataplane flow monitor on all or selected CPU core(s) with a timeout. Stops the flow monitor after the timeout and dumps the flow file to the local file store on the edge. If the top_10 argument is set to true, the top 10 flows are collected; otherwise all flows are collected. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/flow-mon
|
Get NSX Edge dataplane firewall connections |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/fw-conns
|
Get NSX Edge dataplane firewall stats |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/fw-stats
|
Get NSX Edge dataplane geneve cbit setting |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/geneve-cbit
|
Update NSX Edge dataplane geneve cbit settingEnable or disable NSX Edge dataplane geneve critical bit. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/geneve-cbit
|
Update NSX Edge dataplane interrupt mode settingEnable or disable NSX Edge dataplane interrupt mode. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/intr-mode
|
Get NSX Edge dataplane l2vpn pmtu message generation setting |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/l2vpn-pmtu
|
Update NSX Edge dataplane l2vpn pmtu message generation settingEnable or disable NSX Edge dataplane pmtu cache in l2vpn. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/l2vpn-pmtu
|
Deprecated. Please use /node/services/dataplane/pmtu-learning |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/l3vpn-pmtu
(Deprecated)
|
Deprecated. Please use /node/services/dataplane/pmtu-learning |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/l3vpn-pmtu
(Deprecated)
|
Get NSX Edge dataplane pmtu learning setting |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/pmtu-learning
|
Update NSX Edge dataplane pmtu learning settingEnable or disable NSX Edge dataplane pmtu learning |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/pmtu-learning
|
Update NSX Edge dataplane QAT feature enabled statusEnable or disable NSX Edge dataplane QAT feature. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/qat-enable
|
Get NSX Edge dataplane QAT setting |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/qat-status
|
Get NSX Edge rx and tx queue number per port per coreGet NSX Edge rx and tx queue number per port per core. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/queue-num-per-port-per-core
|
Set NSX Edge rx and tx queue number per port per coreSet NSX Edge rx and tx queue number per port per core. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/queue-num-per-port-per-core
|
Return rx/tx ring size information |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/ring-size
|
Set NSX Edge rx ring size for physical portsSet NSX Edge rx ring size for physical ports. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/rx-ring-size
|
Read NSX Edge Datapath service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/status
|
Set NSX Edge tx ring size for physical portsSet NSX Edge tx ring size for physical ports. Dataplane service must be restarted for the change to take effect. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/tx-ring-size
|
Check UPT mode enabled status of NSX Edge dataplaneCheck current status of NSX Edge dataplane UPT mode. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dataplane/upt-mode
|
Read the Corfu Server service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/datastore
GET /api/v1/node/services/datastore |
Restart, start or stop the Corfu Server service |
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/datastore?action=stop POST /api/v1/node/services/datastore?action=restart POST /api/v1/node/services/datastore?action=start POST /api/v1/node/services/datastore?action=stop |
Get the status of Corfu Certificate Expiry Check (enabled or disabled) |
GET /api/v1/cluster/{cluster-node-id}/node/services/datastore/corfu_cert_expiry_check
GET /api/v1/node/services/datastore/corfu_cert_expiry_check |
Enable or Disable Corfu Certificate Expiry Check. Default is enabled |
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore/corfu_cert_expiry_check?action=enable
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore/corfu_cert_expiry_check?action=disable POST /api/v1/node/services/datastore/corfu_cert_expiry_check?action=enable POST /api/v1/node/services/datastore/corfu_cert_expiry_check?action=disable |
Read the Corfu Server service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/datastore/status
GET /api/v1/node/services/datastore/status |
Read the Corfu Log Replication Server service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/datastore_log_replication
GET /api/v1/node/services/datastore_log_replication |
Restart, start or stop the Corfu Log Replication Server service |
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore_log_replication?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore_log_replication?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/datastore_log_replication?action=stop POST /api/v1/node/services/datastore_log_replication?action=restart POST /api/v1/node/services/datastore_log_replication?action=start POST /api/v1/node/services/datastore_log_replication?action=stop |
Read the Corfu Log Replication Server service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/datastore_log_replication/status
GET /api/v1/node/services/datastore_log_replication/status |
Read the Corfu Nonconfig Server service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/datastore_nonconfig
GET /api/v1/node/services/datastore_nonconfig |
Restart, start or stop the Corfu Nonconfig Server service |
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore_nonconfig?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/datastore_nonconfig?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/datastore_nonconfig?action=stop POST /api/v1/node/services/datastore_nonconfig?action=restart POST /api/v1/node/services/datastore_nonconfig?action=start POST /api/v1/node/services/datastore_nonconfig?action=stop |
Read the Corfu Nonconfig Server service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/datastore_nonconfig/status
GET /api/v1/node/services/datastore_nonconfig/status |
Read NSX Edge DHCP service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dhcp
|
Update NSX Edge DHCP service properties |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/dhcp
|
Read NSX Edge DHCP service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dhcp/status
|
Read NSX Edge Dispatcher service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dispatcher
|
Restart, start or stop the NSX Edge Dispatcher service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/dispatcher?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/dispatcher?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/dispatcher?action=stop |
Read NSX Edge Dispatcher service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/dispatcher/status
|
Read NSX Edge Docker service propertiesRead the Docker service process properties from Edge. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/docker
|
Read NSX Edge Docker service statusChecks the status of the dockerd process on the Edge. Returns "running" if the dockerd process is running; otherwise returns "stopped". |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/docker/status
|
Read http service propertiesRead http service properties. To read fields deprecated in this API, see GET /api/v1/cluster/api-service. |
GET /api/v1/cluster/{cluster-node-id}/node/services/http
GET /api/v1/node/services/http |
Update http service certificateApplies a security certificate to the http service. In the POST request, the CERTIFICATE_ID references a certificate created with the /api/v1/trust-management APIs. If the certificate used is a CA-signed certificate, the request fails if the whole chain (leaf, intermediate, root) is not imported. |
POST /api/v1/cluster/{cluster-node-id}/node/services/http?action=apply_certificate
(Deprecated)
POST /api/v1/node/services/http?action=apply_certificate (Deprecated) |
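A minimal sketch of this (deprecated) action follows; the host, credentials, and certificate ID are placeholders, and passing the certificate ID as a query parameter is an assumption to verify against the API schema.

```python
# Minimal sketch for the (deprecated) apply_certificate action. Host,
# credentials, and the certificate ID are placeholders; passing the
# certificate ID as a query parameter is an assumption.
import requests

NSX = "https://nsx-mgr.example.com"
AUTH = ("admin", "VMware1!")
CERT_ID = "certificate-uuid"  # placeholder ID from /api/v1/trust-management

resp = requests.post(
    f"{NSX}/api/v1/node/services/http",
    params={"action": "apply_certificate", "certificate_id": CERT_ID},
    auth=AUTH,
    verify=False,  # lab setup with self-signed certificates
)
resp.raise_for_status()
```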
Stop the http service |
POST /api/v1/cluster/{cluster-node-id}/node/services/http?action=stop
POST /api/v1/node/services/http?action=stop |
Start the http service |
POST /api/v1/cluster/{cluster-node-id}/node/services/http?action=start
POST /api/v1/node/services/http?action=start |
Restart the http service |
POST /api/v1/cluster/{cluster-node-id}/node/services/http?action=restart
POST /api/v1/node/services/http?action=restart |
Update http service propertiesUpdate http service properties. To update fields deprecated in this API, see PUT /api/v1/cluster/api-service. |
PUT /api/v1/cluster/{cluster-node-id}/node/services/http
PUT /api/v1/node/services/http |
Read http service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/http/status
GET /api/v1/node/services/http/status |
Read the idps-reporting service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/idps-reporting
GET /api/v1/node/services/idps-reporting |
Restart, start or stop the idps-reporting service |
POST /api/v1/cluster/{cluster-node-id}/node/services/idps-reporting?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/idps-reporting?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/idps-reporting?action=stop POST /api/v1/node/services/idps-reporting?action=restart POST /api/v1/node/services/idps-reporting?action=start POST /api/v1/node/services/idps-reporting?action=stop |
Read the idps-reporting service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/idps-reporting/status
GET /api/v1/node/services/idps-reporting/status |
Read NSX install-upgrade service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/install-upgrade
GET /api/v1/node/services/install-upgrade |
Restart, start or stop the NSX install-upgrade service |
POST /api/v1/cluster/{cluster-node-id}/node/services/install-upgrade?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/install-upgrade?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/install-upgrade?action=stop POST /api/v1/node/services/install-upgrade?action=restart POST /api/v1/node/services/install-upgrade?action=start POST /api/v1/node/services/install-upgrade?action=stop |
Update NSX install-upgrade service properties |
PUT /api/v1/cluster/{cluster-node-id}/node/services/install-upgrade
PUT /api/v1/node/services/install-upgrade |
Read NSX install-upgrade service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/install-upgrade/status
GET /api/v1/node/services/install-upgrade/status |
Update UC state properties |
PUT /api/v1/cluster/{cluster-node-id}/node/services/install-upgrade/uc-state
PUT /api/v1/node/services/install-upgrade/uc-state |
Read NSX Edge Ipsec VPN service propertiesRead the IPsec VPN service process properties from Edge. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/ipsecvpn
|
Update NSX Edge Ipsec VPN service properties |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/ipsecvpn
|
Read NSX Edge Ipsec VPN service statusChecks the status of the iked process on the Edge. Returns "running" if the iked process is running; otherwise returns "stopped". |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/ipsecvpn/status
|
Read liagent service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/liagent
GET /api/v1/cluster/{cluster-node-id}/node/services/liagent GET /api/v1/node/services/liagent |
Restart, start or stop the liagent service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/liagent?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/liagent?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/liagent?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/liagent?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/liagent?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/liagent?action=stop POST /api/v1/node/services/liagent?action=restart POST /api/v1/node/services/liagent?action=start POST /api/v1/node/services/liagent?action=stop |
Read liagent service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/liagent/status
GET /api/v1/cluster/{cluster-node-id}/node/services/liagent/status GET /api/v1/node/services/liagent/status |
Read NSX Edge NSXA service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/local-controller
|
Restart, start or stop the NSX Edge NSXA service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/local-controller?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/local-controller?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/local-controller?action=stop |
Update NSX Edge NSXA service properties |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/local-controller
|
Read NSX Edge NSXA service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/local-controller/status
|
Read service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/manager
GET /api/v1/node/services/manager |
Reset the logging levels to default values |
POST /api/v1/cluster/{cluster-node-id}/node/services/manager?action=reset-manager-logging-levels
POST /api/v1/node/services/manager?action=reset-manager-logging-levels |
Restart, start or stop the service |
POST /api/v1/cluster/{cluster-node-id}/node/services/manager?action=start
POST /api/v1/cluster/{cluster-node-id}/node/services/manager?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/manager?action=restart POST /api/v1/node/services/manager?action=start POST /api/v1/node/services/manager?action=stop POST /api/v1/node/services/manager?action=restart |
Update service properties |
PUT /api/v1/cluster/{cluster-node-id}/node/services/manager
PUT /api/v1/node/services/manager |
Read service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/manager/status
GET /api/v1/node/services/manager/status |
Read NSX Messaging Manager service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/messaging-manager
GET /api/v1/node/services/messaging-manager |
Restart, start or stop the NSX Messaging Manager service |
POST /api/v1/cluster/{cluster-node-id}/node/services/messaging-manager?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/messaging-manager?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/messaging-manager?action=stop POST /api/v1/node/services/messaging-manager?action=restart POST /api/v1/node/services/messaging-manager?action=start POST /api/v1/node/services/messaging-manager?action=stop |
Read NSX Messaging Manager service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/messaging-manager/status
GET /api/v1/node/services/messaging-manager/status |
Read Metadata-proxy service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/metadata-proxy
|
Read Metadata-proxy service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/metadata-proxy/status
|
Read migration coordinator service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/migration-coordinator
GET /api/v1/node/services/migration-coordinator |
Restart, start or stop the migration coordinator service |
POST /api/v1/cluster/{cluster-node-id}/node/services/migration-coordinator?action=start
POST /api/v1/cluster/{cluster-node-id}/node/services/migration-coordinator?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/migration-coordinator?action=restart POST /api/v1/node/services/migration-coordinator?action=start POST /api/v1/node/services/migration-coordinator?action=stop POST /api/v1/node/services/migration-coordinator?action=restart |
Read migration coordinator service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/migration-coordinator/status
GET /api/v1/node/services/migration-coordinator/status |
Read NSX Nestdb service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nestdb
|
Restart, start or stop the NSX Nestdb service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nestdb?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nestdb?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/nestdb?action=stop |
Read NSX Nestdb service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nestdb/status
|
Read appliance management service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt
GET /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt GET /api/v1/node/services/node-mgmt |
Restart the node management service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt?action=restart POST /api/v1/node/services/node-mgmt?action=restart |
Retrieve Node Management loglevel |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt/loglevel
GET /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt/loglevel GET /api/v1/node/services/node-mgmt/loglevel |
Set Node Management loglevel |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt/loglevel
PUT /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt/loglevel PUT /api/v1/node/services/node-mgmt/loglevel |
Read appliance management service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/node-mgmt/status
GET /api/v1/cluster/{cluster-node-id}/node/services/node-mgmt/status GET /api/v1/node/services/node-mgmt/status |
Read NSX node-stats service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/node-stats
GET /api/v1/node/services/node-stats |
Restart, start or stop the NSX node-stats service |
POST /api/v1/cluster/{cluster-node-id}/node/services/node-stats?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/node-stats?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/node-stats?action=stop POST /api/v1/node/services/node-stats?action=restart POST /api/v1/node/services/node-stats?action=start POST /api/v1/node/services/node-stats?action=stop |
Read NSX node-stats service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/node-stats/status
GET /api/v1/node/services/node-stats/status |
Read NSX Control Plane Agent service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-control-plane-agent
|
Restart, start or stop the NSX Control Plane Agent service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-control-plane-agent?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-control-plane-agent?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-control-plane-agent?action=stop |
Read NSX Control Plane Agent service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-control-plane-agent/status
|
Read NSX Message Bus service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-message-bus
GET /api/v1/node/services/nsx-message-bus |
Restart, start or stop the NSX Message Bus service |
POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-message-bus?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-message-bus?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-message-bus?action=stop POST /api/v1/node/services/nsx-message-bus?action=restart POST /api/v1/node/services/nsx-message-bus?action=start POST /api/v1/node/services/nsx-message-bus?action=stop |
Read NSX Message Bus service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-message-bus/status
GET /api/v1/node/services/nsx-message-bus/status |
Read NSX OpsAgent service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-opsagent
|
Restart, start or stop the NSX OpsAgent service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-opsagent?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-opsagent?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-opsagent?action=stop |
Read NSX OpsAgent service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-opsagent/status
|
Read NSX Platform Client service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client GET /api/v1/node/services/nsx-platform-client |
Restart, start or stop the NSX Platform Client service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client?action=stop POST /api/v1/node/services/nsx-platform-client?action=restart POST /api/v1/node/services/nsx-platform-client?action=start POST /api/v1/node/services/nsx-platform-client?action=stop |
Read NSX Platform Client service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-platform-client/status
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-platform-client/status GET /api/v1/node/services/nsx-platform-client/status |
Read NSX upgrade Agent service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-upgrade-agent
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-upgrade-agent GET /api/v1/node/services/nsx-upgrade-agent |
Restart, start or stop the NSX upgrade agent service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-upgrade-agent?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-upgrade-agent?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-upgrade-agent?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-upgrade-agent?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-upgrade-agent?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/nsx-upgrade-agent?action=stop POST /api/v1/node/services/nsx-upgrade-agent?action=restart POST /api/v1/node/services/nsx-upgrade-agent?action=start POST /api/v1/node/services/nsx-upgrade-agent?action=stop |
Read NSX upgrade agent service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/nsx-upgrade-agent/status
GET /api/v1/cluster/{cluster-node-id}/node/services/nsx-upgrade-agent/status GET /api/v1/node/services/nsx-upgrade-agent/status |
Read NTP service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/ntp
GET /api/v1/cluster/{cluster-node-id}/node/services/ntp GET /api/v1/node/services/ntp |
Restart, start or stop the NTP service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/ntp?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/ntp?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/ntp?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/ntp?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/ntp?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/ntp?action=stop POST /api/v1/node/services/ntp?action=restart POST /api/v1/node/services/ntp?action=start POST /api/v1/node/services/ntp?action=stop |
Update NTP service properties |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/ntp
PUT /api/v1/cluster/{cluster-node-id}/node/services/ntp PUT /api/v1/node/services/ntp |
Read NTP service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/ntp/status
GET /api/v1/cluster/{cluster-node-id}/node/services/ntp/status GET /api/v1/node/services/ntp/status |
Read service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/policy
(Deprecated)
GET /api/v1/node/services/policy (Deprecated) |
Reset the logging levels to default values |
POST /api/v1/cluster/{cluster-node-id}/node/services/policy?action=reset-manager-logging-levels
(Deprecated)
POST /api/v1/node/services/policy?action=reset-manager-logging-levels (Deprecated) |
Restart, start or stop the service |
POST /api/v1/cluster/{cluster-node-id}/node/services/policy?action=start
(Deprecated)
POST /api/v1/cluster/{cluster-node-id}/node/services/policy?action=stop (Deprecated) POST /api/v1/cluster/{cluster-node-id}/node/services/policy?action=restart (Deprecated) POST /api/v1/node/services/policy?action=start (Deprecated) POST /api/v1/node/services/policy?action=stop (Deprecated) POST /api/v1/node/services/policy?action=restart (Deprecated) |
Update service properties |
PUT /api/v1/cluster/{cluster-node-id}/node/services/policy
(Deprecated)
PUT /api/v1/node/services/policy (Deprecated) |
Read service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/policy/status
(Deprecated)
GET /api/v1/node/services/policy/status (Deprecated) |
Read NSX Edge MSR service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/router
|
Read NSX Edge MSR Config service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/router-config
|
Read NSX Edge MSR Config service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/router-config/status
|
Read NSX Edge MSR service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/router/status
|
Read NSX Search service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/search
GET /api/v1/node/services/search |
Restart, start or stop the NSX Search service |
POST /api/v1/cluster/{cluster-node-id}/node/services/search?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/search?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/search?action=stop POST /api/v1/node/services/search?action=restart POST /api/v1/node/services/search?action=start POST /api/v1/node/services/search?action=stop |
Read NSX Search service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/search/status
GET /api/v1/node/services/search/status |
Read NSX Edge security-hub service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/security-hub
|
Read NSX Edge security-hub service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/security-hub/status
|
Read the Site Manager service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/site_manager
GET /api/v1/node/services/site_manager |
Restart, start or stop the Site Manager service |
POST /api/v1/cluster/{cluster-node-id}/node/services/site_manager?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/site_manager?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/site_manager?action=stop POST /api/v1/node/services/site_manager?action=restart POST /api/v1/node/services/site_manager?action=start POST /api/v1/node/services/site_manager?action=stop |
Read the Site Manager service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/site_manager/status
GET /api/v1/node/services/site_manager/status |
Read SNMP service propertiesRead SNMP service properties. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/snmp
GET /api/v1/cluster/{cluster-node-id}/node/services/snmp GET /api/v1/node/services/snmp |
Restart, start or stop the SNMP service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/snmp?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/snmp?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/snmp?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/snmp?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/snmp?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/snmp?action=stop POST /api/v1/node/services/snmp?action=restart POST /api/v1/node/services/snmp?action=start POST /api/v1/node/services/snmp?action=stop |
Update SNMP service propertiesUpdate SNMP service properties. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/snmp
PUT /api/v1/cluster/{cluster-node-id}/node/services/snmp PUT /api/v1/node/services/snmp |
Read SNMP service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/snmp/status
GET /api/v1/cluster/{cluster-node-id}/node/services/snmp/status GET /api/v1/node/services/snmp/status |
Read SNMP V3 Engine ID |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/snmp/v3-engine-id
GET /api/v1/cluster/{cluster-node-id}/node/services/snmp/v3-engine-id GET /api/v1/node/services/snmp/v3-engine-id |
Update SNMP V3 Engine ID |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/snmp/v3-engine-id
PUT /api/v1/cluster/{cluster-node-id}/node/services/snmp/v3-engine-id PUT /api/v1/node/services/snmp/v3-engine-id |
Read ssh service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/ssh
GET /api/v1/cluster/{cluster-node-id}/node/services/ssh GET /api/v1/node/services/ssh |
Restart, start or stop the ssh service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/ssh?action=start
POST /api/v1/transport-nodes/{transport-node-id}/node/services/ssh?action=stop POST /api/v1/transport-nodes/{transport-node-id}/node/services/ssh?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/ssh?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/ssh?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/ssh?action=restart POST /api/v1/node/services/ssh?action=start POST /api/v1/node/services/ssh?action=stop POST /api/v1/node/services/ssh?action=restart |
Remove a host's fingerprint from known hosts file |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/ssh?action=remove_host_fingerprint
POST /api/v1/cluster/{cluster-node-id}/node/services/ssh?action=remove_host_fingerprint POST /api/v1/node/services/ssh?action=remove_host_fingerprint |
Update ssh service propertiesUpdate ssh service properties. If the start_on_boot property is updated to true, existing ssh sessions, if any, are stopped and the ssh service is restarted. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/ssh
PUT /api/v1/cluster/{cluster-node-id}/node/services/ssh PUT /api/v1/node/services/ssh |
Restart, start or stop the ssh service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/ssh/notify_mpa?action=start
POST /api/v1/transport-nodes/{transport-node-id}/node/services/ssh/notify_mpa?action=stop POST /api/v1/transport-nodes/{transport-node-id}/node/services/ssh/notify_mpa?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/ssh/notify_mpa?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/ssh/notify_mpa?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/ssh/notify_mpa?action=restart POST /api/v1/node/services/ssh/notify_mpa?action=start POST /api/v1/node/services/ssh/notify_mpa?action=stop POST /api/v1/node/services/ssh/notify_mpa?action=restart |
Read ssh service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/ssh/status
GET /api/v1/cluster/{cluster-node-id}/node/services/ssh/status GET /api/v1/node/services/ssh/status |
Read syslog service properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/syslog
GET /api/v1/cluster/{cluster-node-id}/node/services/syslog GET /api/v1/node/services/syslog |
Restart, start or stop the syslog service |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/syslog?action=restart
POST /api/v1/transport-nodes/{transport-node-id}/node/services/syslog?action=start POST /api/v1/transport-nodes/{transport-node-id}/node/services/syslog?action=stop POST /api/v1/cluster/{cluster-node-id}/node/services/syslog?action=restart POST /api/v1/cluster/{cluster-node-id}/node/services/syslog?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/syslog?action=stop POST /api/v1/node/services/syslog?action=restart POST /api/v1/node/services/syslog?action=start POST /api/v1/node/services/syslog?action=stop |
Delete all node syslog exportersRemoves all syslog exporter rules. |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/services/syslog/exporters
DELETE /api/v1/cluster/{cluster-node-id}/node/services/syslog/exporters DELETE /api/v1/node/services/syslog/exporters |
List node syslog exportersReturns the collection of registered syslog exporter rules, if any. The rules specify the collector IP address and port, and the protocol to use. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/syslog/exporters
GET /api/v1/cluster/{cluster-node-id}/node/services/syslog/exporters GET /api/v1/node/services/syslog/exporters |
Verify node syslog exporterCollect iptables rules needed for all existing syslog exporters and verify if the existing iptables rules are the same. If not, remove the stale rules and add the new rules to make sure all exporters work properly. |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/syslog/exporters?action=verify
POST /api/v1/cluster/{cluster-node-id}/node/services/syslog/exporters?action=verify POST /api/v1/node/services/syslog/exporters?action=verify |
Add node syslog exporterAdds a rule for exporting syslog information to a specified server. The required parameters are the rule name (exporter_name); severity level (emerg, alert, crit, and so on); transmission protocol (TCP or UDP); and server IP address or hostname. The optional parameters are the syslog port number, which can be 1 through 65,535 (514, by default); facility level to use when logging messages to syslog (kern, user, mail, and so on); and message IDs (msgids), which identify the types of messages to export. |
POST /api/v1/transport-nodes/{transport-node-id}/node/services/syslog/exporters
POST /api/v1/cluster/{cluster-node-id}/node/services/syslog/exporters POST /api/v1/node/services/syslog/exporters |
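The exporter parameters above lend themselves to a concrete request. The following is a minimal sketch, not an official client: the NSX Manager address and credentials are placeholders, and the body field names other than exporter_name (level, protocol, server, port) are assumptions inferred from the description.

```python
# Hypothetical sketch: add a syslog exporter rule on the local node.
# Host, credentials, and field names other than exporter_name are assumptions.
import requests

NSX = "https://nsx-mgr.example.com"      # placeholder NSX Manager address
AUTH = ("admin", "password")             # placeholder credentials

rule = {
    "exporter_name": "remote-collector", # required rule name (named in the description)
    "level": "INFO",                     # assumed field for the severity level
    "protocol": "TCP",                   # transmission protocol, TCP or UDP
    "server": "10.1.1.50",               # collector IP address or hostname
    "port": 514,                         # optional; defaults to 514 per the description
}

resp = requests.post(f"{NSX}/api/v1/node/services/syslog/exporters",
                     json=rule, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json())
```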
Delete node syslog exporterRemoves a specified rule from the collection of syslog exporter rules. |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/services/syslog/exporters/{exporter-name}
DELETE /api/v1/cluster/{cluster-node-id}/node/services/syslog/exporters/{exporter-name} DELETE /api/v1/node/services/syslog/exporters/{exporter-name} |
Read node syslog exporterReturns information about a specific syslog collection point. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/syslog/exporters/{exporter-name}
GET /api/v1/cluster/{cluster-node-id}/node/services/syslog/exporters/{exporter-name} GET /api/v1/node/services/syslog/exporters/{exporter-name} |
Read syslog service status |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/syslog/status
GET /api/v1/cluster/{cluster-node-id}/node/services/syslog/status GET /api/v1/node/services/syslog/status |
Read Telemetry service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/telemetry
GET /api/v1/node/services/telemetry |
Restart, start or stop Telemetry service |
POST /api/v1/cluster/{cluster-node-id}/node/services/telemetry?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/telemetry?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/telemetry?action=stop POST /api/v1/node/services/telemetry?action=restart POST /api/v1/node/services/telemetry?action=start POST /api/v1/node/services/telemetry?action=stop |
Reset the logging levels to default values |
POST /api/v1/cluster/{cluster-node-id}/node/services/telemetry?action=reset-telemetry-logging-levels
POST /api/v1/node/services/telemetry?action=reset-telemetry-logging-levels |
Update Telemetry service properties |
PUT /api/v1/cluster/{cluster-node-id}/node/services/telemetry
PUT /api/v1/node/services/telemetry |
Read Telemetry service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/telemetry/status
GET /api/v1/node/services/telemetry/status |
Read ui service properties |
GET /api/v1/cluster/{cluster-node-id}/node/services/ui-service
GET /api/v1/node/services/ui-service |
Restart, Start and Stop the ui service |
POST /api/v1/cluster/{cluster-node-id}/node/services/ui-service?action=restart
POST /api/v1/cluster/{cluster-node-id}/node/services/ui-service?action=start POST /api/v1/cluster/{cluster-node-id}/node/services/ui-service?action=stop POST /api/v1/node/services/ui-service?action=restart POST /api/v1/node/services/ui-service?action=start POST /api/v1/node/services/ui-service?action=stop |
Read ui service status |
GET /api/v1/cluster/{cluster-node-id}/node/services/ui-service/status
GET /api/v1/node/services/ui-service/status |
Get NSX Edge IPSec Determ RSS settingDisplays the configured value for IPSec VPN Deterministic ESP RSS and shows whether the vmxnet driver supports this feature. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/vpn/ipsec/deterministic-esp-rss
|
Update NSX Edge IPSec Determ RSS settingEnable or disable NSX Edge IPSec Determ RSS. Deterministically queues ESP packets to CPU queues to achieve higher throughput. To enable this feature, the vmxnet driver version must be 7 or later. |
PUT /api/v1/transport-nodes/{transport-node-id}/node/services/vpn/ipsec/deterministic-esp-rss
|
Get NSX Edge IPSec Determ RSS settingDisplays the configured value for IPSec VPN Deterministic ESP RSS and shows whether the vmxnet driver supports this feature. |
GET /api/v1/transport-nodes/{transport-node-id}/node/services/vpn/ipsec/deterministic-esp-rss/status
|
Read node statusReturns information about the node appliance's file system, CPU, memory, disk usage, and uptime. |
GET /api/v1/transport-nodes/{transport-node-id}/node/status
GET /api/v1/cluster/{cluster-node-id}/node/status GET /api/v1/node/status |
Update node statusClear node bootup status |
POST /api/v1/transport-nodes/{transport-node-id}/node/status?action=clear_bootup_error
POST /api/v1/cluster/{cluster-node-id}/node/status?action=clear_bootup_error POST /api/v1/node/status?action=clear_bootup_error |
List appliance management tasks |
GET /api/v1/transport-nodes/{transport-node-id}/node/tasks
GET /api/v1/cluster/{cluster-node-id}/node/tasks GET /api/v1/node/tasks |
Delete task |
DELETE /api/v1/transport-nodes/{transport-node-id}/node/tasks/{task-id}
DELETE /api/v1/cluster/{cluster-node-id}/node/tasks/{task-id} DELETE /api/v1/node/tasks/{task-id} |
Read task properties |
GET /api/v1/transport-nodes/{transport-node-id}/node/tasks/{task-id}
GET /api/v1/cluster/{cluster-node-id}/node/tasks/{task-id} GET /api/v1/node/tasks/{task-id} |
Cancel specified task |
POST /api/v1/transport-nodes/{transport-node-id}/node/tasks/{task-id}?action=cancel
POST /api/v1/cluster/{cluster-node-id}/node/tasks/{task-id}?action=cancel POST /api/v1/node/tasks/{task-id}?action=cancel |
Read asynchronous task response |
GET /api/v1/transport-nodes/{transport-node-id}/node/tasks/{task-id}/response
GET /api/v1/cluster/{cluster-node-id}/node/tasks/{task-id}/response GET /api/v1/node/tasks/{task-id}/response |
List node usersReturns the list of users configured to log in to the NSX appliance. |
GET /api/v1/transport-nodes/{transport-node-id}/node/users
GET /api/v1/transport-nodes/{transport-node-id}/node/users?internal=true GET /api/v1/cluster/{cluster-node-id}/node/users GET /api/v1/cluster/{cluster-node-id}/node/users?internal=true GET /api/v1/node/users GET /api/v1/node/users?internal=true |
Create node usersCreates a new user account that can log in to the NSX web-based user interface or access the API. username is a required field when creating a new user. The usernames root, admin, and audit are reserved and cannot be used for new accounts, except when creating the local audit user. For the local audit account, if no username is specified in the request, the account is created with the username audit by default, although administrators may use any other non-duplicate username during creation. |
POST /api/v1/cluster/{cluster-node-id}/node/users?action=create_user
POST /api/v1/cluster/{cluster-node-id}/node/users?action=create_audit_user POST /api/v1/node/users?action=create_user POST /api/v1/node/users?action=create_audit_user |
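A minimal sketch of a create_user call, under stated assumptions: the host and credentials are placeholders, and the password field name is a guess; only username is named in the description above.

```python
# Hypothetical sketch: create a new node user account.
import requests

NSX = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder admin credentials

body = {
    "username": "ops-user",           # required; root, admin, audit are reserved
    "password": "Str0ng!Passw0rd",    # assumed field name for the initial password
}

resp = requests.post(f"{NSX}/api/v1/node/users?action=create_user",
                     json=body, auth=AUTH, verify=False)
resp.raise_for_status()
```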
Reset a user's own password. Requires current passwordEnables a user to reset their own password. |
POST /api/v1/cluster/{cluster-node-id}/node/users?action=reset_own_password
POST /api/v1/node/users?action=reset_own_password |
Delete node userDeletes the specified user who is configured to log in to the NSX appliance. The local users root and administrator cannot be deleted, but the local audit user can be deleted on demand. Caution: users deleted from the following node types cannot be recovered.
|
DELETE /api/v1/transport-nodes/{transport-node-id}/node/users/{userid}
DELETE /api/v1/cluster/{cluster-node-id}/node/users/{userid} DELETE /api/v1/node/users/{userid} |
Read node userReturns information about a specified user who is configured to log in to the NSX appliance. The valid user IDs are: 0, 10000, 10002 or other users managed by administrators. |
GET /api/v1/transport-nodes/{transport-node-id}/node/users/{userid}
GET /api/v1/cluster/{cluster-node-id}/node/users/{userid} GET /api/v1/node/users/{userid} |
Activate a user account with a passwordActivates the account for this user. When an account is successfully activated, the "status" field in the response is "ACTIVE". This API is not supported for userid 0 and userid 10000. |
POST /api/v1/cluster/{cluster-node-id}/node/users/{userid}?action=activate
POST /api/v1/node/users/{userid}?action=activate |
Reset a user's password without requiring their current passwordUnlike the PUT version of this call (PUT /node/users/<userid>), this API does not require that the current password for the user be provided. The account of the target user must be "ACTIVE" for the call to succeed. This API is not supported for userid 0 and userid 10000. |
POST /api/v1/cluster/{cluster-node-id}/node/users/{userid}?action=reset_password
POST /api/v1/node/users/{userid}?action=reset_password |
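A short sketch of the reset_password action, which unlike the PUT variant does not send the user's current password. The body field name is an assumption; host, credentials, and the userid are placeholders.

```python
# Hypothetical sketch: reset another user's password without their current one.
import requests

NSX = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder; caller needs admin rights

resp = requests.post(f"{NSX}/api/v1/node/users/10002?action=reset_password",
                     json={"password": "N3w!Passw0rd"},  # assumed field name
                     auth=AUTH, verify=False)
resp.raise_for_status()
```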
Deactivate a user accountDeactivates the account for this user. Deactivating an account is permanent, unlike an account that is temporarily locked because of too many password failures. A deactivated account has to be explicitly activated. When an account is successfully deactivated, the "status" field in the response is "NOT_ACTIVATED". This API is not supported for userid 0 and userid 10000. |
POST /api/v1/cluster/{cluster-node-id}/node/users/{userid}?action=deactivate
POST /api/v1/node/users/{userid}?action=deactivate |
Update node userUpdates attributes of an existing NSX appliance user. This method fails if the specified password does not meet the configured complexity requirements; the configured password complexity may vary per the defined authentication and password policies, which are available at: [GET]: /api/v1/node/aaa/auth-policy The valid user IDs are: 0, 10000, 10002 or other users managed by administrators.
|
PUT /api/v1/transport-nodes/{transport-node-id}/node/users/{userid}
PUT /api/v1/cluster/{cluster-node-id}/node/users/{userid} PUT /api/v1/node/users/{userid} |
List SSH keys from authorized_keys file for node userReturns a list of all SSH keys from authorized_keys file for node user |
GET /api/v1/transport-nodes/{transport-node-id}/node/users/{userid}/ssh-keys
GET /api/v1/cluster/{cluster-node-id}/node/users/{userid}/ssh-keys GET /api/v1/node/users/{userid}/ssh-keys |
Remove SSH public key from authorized_keys file for node user |
POST /api/v1/transport-nodes/{transport-node-id}/node/users/{userid}/ssh-keys?action=remove_ssh_key
POST /api/v1/cluster/{cluster-node-id}/node/users/{userid}/ssh-keys?action=remove_ssh_key POST /api/v1/node/users/{userid}/ssh-keys?action=remove_ssh_key |
Add SSH public key to authorized_keys file for node user |
POST /api/v1/transport-nodes/{transport-node-id}/node/users/{userid}/ssh-keys?action=add_ssh_key
POST /api/v1/cluster/{cluster-node-id}/node/users/{userid}/ssh-keys?action=add_ssh_key POST /api/v1/node/users/{userid}/ssh-keys?action=add_ssh_key |
Read node version |
GET /api/v1/transport-nodes/{transport-node-id}/node/version
GET /api/v1/cluster/{cluster-node-id}/node/version GET /api/v1/node/version |
Clean up all nvds upgrade related configurationsThis API needs to be invoked before another precheck and upgrade is requested. It cleans up the precheck configuration and VDS topology from the last request. |
POST /api/v1/nvds-urt?action=cleanup
(Deprecated)
|
Set the migrate status key of ExtraConfigProfile of all Transport Nodes to IGNORE |
POST /api/v1/nvds-urt?action=ignore_migrate_status
(Deprecated)
|
Retrieve latest precheck ID of the N-VDS to VDS migration |
GET /api/v1/nvds-urt/precheck
(Deprecated)
|
Start precheck for N-VDS to VDS migrationPrecheck is performed at a global level across all NVDSes present in the system. Once the precheck API is invoked, check the status via the GetNvdsUpgradeReadinessCheckSummary API. If NVDS configuration like HostSwitchProfiles differs across TransportNodes, precheck will fail, the status will be FAILED, and the error will be reported via the status API. Once the reported errors are fixed, invoke the precheck API again to rerun precheck. Once the NVDS configuration is consistent across all TransportNodes, precheck will pass, a topology will be generated, and the status will be PENDING_TOPOLOGY. The generated topology can be retrieved via the GetRecommendedVdsTopology API. The user can apply the recommended topology via the SetTargetVdsTopology API. |
POST /api/v1/nvds-urt/precheck
(Deprecated)
|
Retrieve latest precheck ID of the N-VDS to VDS migration for the cluster |
GET /api/v1/nvds-urt/precheck-by-cluster/{cluster_id}
(Deprecated)
|
Start precheck for N-VDS to VDS migration by cluster |
POST /api/v1/nvds-urt/precheck-by-cluster/{cluster_id}
(Deprecated)
|
Start precheck for N-VDS to VDS migration by clusters |
POST /api/v1/nvds-urt/precheck-by-clusters
(Deprecated)
|
Get summary of N-VDS to VDS migration |
GET /api/v1/nvds-urt/status-summary-by-cluster/{precheck-id}
(Deprecated)
|
Get summary of N-VDS to VDS migrationProvides overall status for Precheck as well as actual NVDS to CVDS upgrade status per host. Precheck statuses are as follows: 1. IN_PROGRESS: Precheck is in progress 2. FAILED: Precheck failed; the error can be found in the precheck_issue field 3. PENDING_TOPOLOGY: Precheck succeeded and a recommended topology was generated 4. APPLYING_TOPOLOGY: SetTargetTopology is in progress 5. APPLY_TOPOLOGY_FAILED: SetTargetTopology failed 6. READY: SetTargetTopology succeeded and the TransportNodes provided as part of the topology are ready for upgrade from NVDS to CVDS |
GET /api/v1/nvds-urt/status-summary/{precheck-id}
(Deprecated)
|
Unset VDS configuration and remove it from vCenterThis will revert the corresponding VDS to the PENDING_TOPOLOGY state. The user can revert the entire topology all together, or revert partially depending on which TransportNodes the user does not want to upgrade to VDS. |
POST /api/v1/nvds-urt/topology?action=revert
(Deprecated)
|
Set VDS configuration and create it in vCenterUpon successful precheck, the status goes to PENDING_TOPOLOGY and a global recommended topology is generated, which can be retrieved via the GetRecommendedVdsTopology API. The user can apply the entire recommended topology all together, or apply it partially depending on which TransportNodes the user wants upgraded from NVDS to CVDS. The user can change the system-generated vds_name field; all other fields cannot be changed when applying the topology. |
POST /api/v1/nvds-urt/topology?action=apply
(Deprecated)
|
Recommended topology |
GET /api/v1/nvds-urt/topology-by-cluster/{precheck-id}
(Deprecated)
|
Set VDS configuration and create it in vCenter |
POST /api/v1/nvds-urt/topology-by-cluster/{precheck-id}?action=apply
(Deprecated)
|
Recommended topologyThis returns the global recommended topology generated when precheck is successful. |
GET /api/v1/nvds-urt/topology/{precheck-id}
(Deprecated)
|
Get PCG registration payload |
GET /api/v1/pcg-registration-payload
|
Returns list of configured IP address blocks.Returns information about configured IP address blocks. Information includes the id, display name, description & CIDR of IP address blocks |
GET /api/v1/pools/ip-blocks
(Deprecated)
|
Create a new IP address block.Creates a new IPv4 address block using the specified cidr. cidr is a required parameter. display_name & description are optional parameters |
POST /api/v1/pools/ip-blocks
(Deprecated)
|
Delete an IP Address BlockDeletes the IP address block with specified id if it exists. IP block cannot be deleted if there are allocated subnets from the block. |
DELETE /api/v1/pools/ip-blocks/{block-id}
(Deprecated)
|
Get IP address block information.Returns information about the IP address block with specified id. Information includes id, display_name, description & cidr. |
GET /api/v1/pools/ip-blocks/{block-id}
(Deprecated)
|
Update an IP Address BlockModifies the IP address block with the specified id. display_name, description and cidr are parameters that can be modified. If a new cidr is specified, it should contain all existing subnets in the IP block. Returns a conflict error if the IP address block cidr cannot be modified due to the presence of subnets that it contains. E.g., if the IP block contains a subnet 192.168.0.1/24 and we try to change the IP block cidr to 10.1.0.1/16, it results in a conflict. |
PUT /api/v1/pools/ip-blocks/{block-id}
(Deprecated)
|
List IP PoolsReturns information about the configured IP address pools. Information includes the display name and description of the pool and the details of each of the subnets in the pool, including the DNS servers, allocation ranges, gateway, and CIDR subnet address. |
GET /api/v1/pools/ip-pools
|
Create an IP PoolCreates a new IPv4 or IPv6 address pool. Required parameters are allocation_ranges and cidr. Optional parameters are display_name, description, dns_nameservers, dns_suffix, and gateway_ip. |
POST /api/v1/pools/ip-pools
|
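A sketch of an IP pool create request using the parameters named above (allocation_ranges, cidr, gateway_ip, dns_nameservers, display_name). The nesting of the subnet fields under a "subnets" list and the start/end range keys are assumptions; host and credentials are placeholders.

```python
# Hypothetical sketch: create an IPv4 address pool.
import requests

NSX = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

pool = {
    "display_name": "tep-pool",                         # optional
    "subnets": [{                                       # assumed nesting of subnet fields
        "cidr": "192.168.10.0/24",                      # required
        "allocation_ranges": [{"start": "192.168.10.10",
                               "end": "192.168.10.100"}],  # required
        "gateway_ip": "192.168.10.1",                   # optional
        "dns_nameservers": ["192.168.10.2"],            # optional
    }],
}

resp = requests.post(f"{NSX}/api/v1/pools/ip-pools", json=pool,
                     auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json()["id"])
```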
Delete an IP PoolDeletes the specified IP address pool. By default, if the IpPool is used in other configurations (such as transport node template), it won't be deleted. In such situations, pass "force=true" as query param to force delete the IpPool |
DELETE /api/v1/pools/ip-pools/{pool-id}
|
Read IP PoolReturns information about the specified IP address pool. |
GET /api/v1/pools/ip-pools/{pool-id}
|
Allocate or Release an IP Address from a PoolAllocates or releases an IP address from the specified IP pool. To allocate an address, include ?action=ALLOCATE in the request and "allocation_id":null in the request body. When the request is successful, the response is "allocation_id": "<ip-address>", where <ip-address> is an IP address from the specified pool. To release an IP address (return it back to the pool), include ?action=RELEASE in the request and "allocation_id":<ip-address> in the request body, where <ip-address> is the address to be released. When the request is successful, the response is NULL. Tags, display_name and description attributes are not supported for AllocationIpAddress in this release. |
POST /api/v1/pools/ip-pools/{pool-id}
|
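The request and response bodies for allocate and release are spelled out above, so the round trip can be sketched directly. Host, credentials, and the pool id below are placeholders.

```python
# Sketch of the allocate/release flow for an IP pool.
import requests

NSX = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder
POOL = "pool-id"                      # placeholder pool id

# Allocate: action=ALLOCATE with "allocation_id": null in the body.
r = requests.post(f"{NSX}/api/v1/pools/ip-pools/{POOL}?action=ALLOCATE",
                  json={"allocation_id": None}, auth=AUTH, verify=False)
r.raise_for_status()
ip = r.json()["allocation_id"]        # the allocated IP address

# Release: action=RELEASE with the allocated address in the body.
requests.post(f"{NSX}/api/v1/pools/ip-pools/{POOL}?action=RELEASE",
              json={"allocation_id": ip}, auth=AUTH, verify=False)
```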
Update an IP PoolModifies the specified IP address pool. Modifiable parameters include the description, display_name, and all subnet information. |
PUT /api/v1/pools/ip-pools/{pool-id}
|
List IP Pool AllocationsReturns information about which addresses have been allocated from a specified IP address pool. |
GET /api/v1/pools/ip-pools/{pool-id}/allocations
|
List subnets within an IP blockReturns information about all subnets present within an IP address block. Information includes subnet's id, display_name, description, cidr and allocation ranges. |
GET /api/v1/pools/ip-subnets
|
Create subnet of specified size within an IP blockCarves out a subnet of the requested size from the specified IP block. The "size" and "block_id" parameters are required fields when invoking this API. If the IP block has sufficient resources/space to allocate a subnet of the specified size, the response will contain all the details of the newly created subnet, including the display_name, description, cidr & allocation_ranges. Returns a conflict error if the IP block does not have enough resources/space to allocate a subnet of the requested size. |
POST /api/v1/pools/ip-subnets
(Deprecated)
|
Delete subnet within an IP blockDeletes a subnet with specified id within a given IP address block. Deletion is allowed only when there are no allocated IP addresses from that subnet. |
DELETE /api/v1/pools/ip-subnets/{subnet-id}
(Deprecated)
|
Get the subnet within an IP blockReturns information about the subnet with specified id within a given IP address block. Information includes display_name, description, cidr and allocation_ranges. |
GET /api/v1/pools/ip-subnets/{subnet-id}
(Deprecated)
|
Allocate or Release an IP Address from an IP SubnetAllocates or releases an IP address from the specified IP subnet. To allocate an address, include ?action=ALLOCATE in the request and "{}" in the request body. When the request is successful, the response is "allocation_id": "<ip-address>", where <ip-address> is an IP address from the specified pool. To release an IP address (return it back to the pool), include ?action=RELEASE in the request and "allocation_id":<ip-address> in the request body, where <ip-address> is the address to be released. When the request is successful, the response is NULL. |
POST /api/v1/pools/ip-subnets/{subnet-id}
(Deprecated)
|
List MAC PoolsReturns a list of all the MAC pools |
GET /api/v1/pools/mac-pools
|
Read MAC PoolReturns information about the specified MAC pool. |
GET /api/v1/pools/mac-pools/{pool-id}
|
List VNI PoolsReturns information about the default and configured virtual network identifier (VNI) pools for use when building logical network segments. Each virtual network has a unique ID called a VNI. Instead of creating a new VNI each time you need a new logical switch, you can instead allocate a VNI from a VNI pool. VNI pools are sometimes called segment ID pools. Each VNI pool has a range of usable VNIs. By default, there is one pool with two ranges [5000, 65535] and [65536, 75000]. To create multiple smaller pools, specify a smaller range for each pool such as 75001-75100 and 75101-75200. The VNI range determines the maximum number of logical switches that can be created in each network segment. |
GET /api/v1/pools/vni-pools
|
Create a new VNI Pool.Creates a new VNI pool using the specified VNI pool range. The range must not overlap an existing range. If the range in the payload already exists or overlaps an existing range, the API returns 400 Bad Request with an error message stating that the given range overlaps with an existing range. |
POST /api/v1/pools/vni-pools
|
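A sketch of a VNI pool create request. The "ranges" parameter is named in the update row further below; the start/end keys inside each range are assumptions, and host and credentials are placeholders.

```python
# Hypothetical sketch: create a VNI pool with one range.
import requests

NSX = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

pool = {
    "display_name": "vni-pool-1",
    "ranges": [{"start": 75001, "end": 75100}],  # must not overlap existing ranges
}

resp = requests.post(f"{NSX}/api/v1/pools/vni-pools", json=pool,
                     auth=AUTH, verify=False)
if resp.status_code == 400:
    # Per the description, an overlapping range yields 400 Bad Request.
    print(resp.json())
resp.raise_for_status()
```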
Delete a VNI PoolDeletes the given VNI pool. |
DELETE /api/v1/pools/vni-pools/{pool-id}
|
Read VNI PoolReturns information about the specified virtual network identifier (VNI) pool. |
GET /api/v1/pools/vni-pools/{pool-id}
|
Update a VNI PoolUpdates the specified VNI pool. Modifiable parameters include description, display_name and ranges. Ranges can be added, modified or deleted. Overlapping ranges are not allowed. Only range end can be modified for any existing range. Range shrinking or deletion is not allowed if there are any allocated VNIs. |
PUT /api/v1/pools/vni-pools/{pool-id}
|
List virtual tunnel endpoint Label PoolsReturns a list of all virtual tunnel endpoint label pools |
GET /api/v1/pools/vtep-label-pools
|
Read a virtual tunnel endpoint label poolReturns information about the specified virtual tunnel endpoint label pool. |
GET /api/v1/pools/vtep-label-pools/{pool-id}
|
Gets the realization state barrier configurationReturns the current barrier configuration |
GET /api/v1/realization-state-barrier/config
|
Updates the barrier configurationUpdates the barrier configuration. The interval is set in milliseconds and determines how often the global realization number is automatically incremented. |
PUT /api/v1/realization-state-barrier/config
|
Gets the current barrier numberReturns the current global realization barrier number for NSX. |
GET /api/v1/realization-state-barrier/current
(Deprecated)
|
Increments the barrier count by 1Increment the current barrier number by 1 for NSX. |
POST /api/v1/realization-state-barrier/current?action=increment
(Deprecated)
|
Get list of bundle-ids which are available in repository or in-progressGet list of bundle-ids which are available in repository or in-progress |
GET /api/v1/repository/bundles
|
Upload bundle using remote fileUpload the bundle from remote bundle URL. The call returns after fetch is initiated. Check status by periodically retrieving bundle upload status using GET /repository/bundles/<bundle-id>/upload-status. The upload is complete when the status is SUCCESS. |
POST /api/v1/repository/bundles
|
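The remote-upload flow above is fetch-then-poll. The sketch below follows that pattern under heavy assumptions: the request field carrying the remote URL, the response field holding the bundle id, and the status field name are all guesses; host and credentials are placeholders. Only the polling pattern itself comes from the description.

```python
# Hypothetical sketch: start a remote bundle fetch, then poll upload-status.
import time
import requests

NSX = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

r = requests.post(f"{NSX}/api/v1/repository/bundles",
                  json={"url": "https://files.example.com/bundle.tgz"},  # assumed field
                  auth=AUTH, verify=False)
r.raise_for_status()
bundle_id = r.json().get("bundle_id")  # assumed response field

# Poll upload-status until it reports SUCCESS, as the description instructs.
while True:
    s = requests.get(f"{NSX}/api/v1/repository/bundles/{bundle_id}/upload-status",
                     auth=AUTH, verify=False).json()
    if s.get("status") == "SUCCESS":   # status field name assumed
        break
    time.sleep(10)
```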
Upload bundleUpload the bundle. This call returns after upload is completed. You can check bundle processing status periodically by retrieving bundle upload-status to find out if the upload and processing is completed. |
POST /api/v1/repository/bundles?action=upload
|
Cancel bundle uploadCancel upload of bundle. This API works only when bundle upload is in-progress and will not work during post-processing of bundle. If bundle upload is in-progress, then the API call returns http OK response after cancelling the upload and deleting partially uploaded bundle. |
POST /api/v1/repository/bundles/{bundle-id}?action=cancel_upload
|
Get bundle upload statusGet uploaded bundle upload status |
GET /api/v1/repository/bundles/{bundle-id}/upload-status
|
Get information of the OVF which will be deployed.Returns information about the OVF for the specified appliance which is present in the repository and will be used to deploy a new VM. |
GET /api/v1/repository/bundles/ovf-deploy-info
|
Checks bundle upload permissionsChecks whether bundle upload is allowed on given node for given appliance. There are different kinds of checks for different appliances. Some of the checks for Intelligence appliance are as follows: 1. Is bundle upload-allowed on given node 2. Is bundle upload already in-progress |
GET /api/v1/repository/bundles/upload-allowed
|
Get the site configuration |
GET /api/v1/sites
|
Get the site configuration, some attributes won't be shown based on version |
GET /api/v1/sites?version=3.0.2
GET /api/v1/sites?version=3.1.0 GET /api/v1/sites?version=latest |
Get the compatibility list of the siteReturns the version of this site and list of compatible versions |
GET /api/v1/sites/compatibility
|
Check whether the remote site version is compatible with this siteReturns the version of this site and the list of compatible versions for both the local and remote site, along with a boolean indicating whether the two are compatible; this value is true if one site's version is in the compatibility list of the other site. |
GET /api/v1/sites/compatibility/remote
|
Get the local site configuration |
GET /api/v1/sites/self
|
Get overall status of the federation, including stub status |
GET /api/v1/sites/status
|
Get the switchover status |
GET /api/v1/sites/switchover-status
|
Fetch the policy partial patch configuration value.Get configuration values for nsx-partial-patch. By default, partial patch is disabled (i.e. false). |
GET /policy/api/v1/system-config/nsx-partial-patch-config
|
Saves the configuration for policy partial patchUpdates partial patch configuration values. Only a boolean value is allowed for enable_partial_patch. |
PATCH /policy/api/v1/system-config/nsx-partial-patch-config
|
Get information about all tasks |
GET /policy/api/v1/tasks
GET /api/v1/tasks |
Get information about the specified task |
GET /policy/api/v1/tasks/{task-id}
GET /api/v1/tasks/{task-id} |
Get the response of a task |
GET /policy/api/v1/tasks/{task-id}/response
GET /api/v1/tasks/{task-id}/response |
List Transport Node collectionsReturns all Transport Node collections |
GET /api/v1/transport-node-collections
|
Create transport node collection by attaching Transport Node Profile to cluster.When a transport node collection is created, the hosts which are part of the compute collection will be prepared automatically, i.e. NSX Manager attempts to install the NSX components on the hosts. Transport nodes for these hosts are created using the configuration specified in the transport node profile. |
POST /api/v1/transport-node-collections
|
Detach transport node profile from compute collection.By deleting transport node collection, we are detaching the transport node profile(TNP) from the compute collection. It has no effect on existing transport nodes. However, new hosts added to the compute collection will no longer be automatically converted to NSX transport node. Detaching TNP from compute collection does not delete TNP. |
DELETE /api/v1/transport-node-collections/{transport-node-collection-id}
|
Get Transport Node collection by idReturns transport node collection by id |
GET /api/v1/transport-node-collections/{transport-node-collection-id}
|
Retry the process on applying transport node profileThis API is relevant for compute collections on which vLCM is enabled. This API should be invoked to retry the realization of the transport node profile on the compute collection. This is useful when profile realization failed because of an error in vLCM. This API has no effect if vLCM is not enabled on the compute collection. |
POST /api/v1/transport-node-collections/{transport-node-collection-id}?action=retry_profile_realization
|
Update Transport Node collectionAttach different transport node profile to compute collection by updating transport node collection. |
PUT /api/v1/transport-node-collections/{transport-node-collection-id}
|
Get Transport Node collection application stateReturns the state of transport node collection based on the states of transport nodes of the hosts which are part of compute collection. |
GET /api/v1/transport-node-collections/{transport-node-collection-id}/state
|
List Transport Node ProfilesReturns information about all transport node profiles. |
GET /api/v1/transport-node-profiles
(Deprecated)
|
Create a Transport Node ProfileTransport node profile captures the configuration needed to create a transport node. A transport node profile can be attached to compute collections for automatic TN creation of member hosts. |
POST /api/v1/transport-node-profiles
(Deprecated)
|
Delete a Transport Node ProfileDeletes the specified transport node profile. A transport node profile can be deleted only when it is not attached to any compute collection. |
DELETE /api/v1/transport-node-profiles/{transport-node-profile-id}
(Deprecated)
|
Get a Transport Node ProfileReturns information about a specified transport node profile. |
GET /api/v1/transport-node-profiles/{transport-node-profile-id}
(Deprecated)
|
Update a Transport Node ProfileWhen the configuration of a transport node profile (TNP) is updated, all the transport nodes in all the compute collections to which this TNP is attached are updated to reflect the updated configuration. |
PUT /api/v1/transport-node-profiles/{transport-node-profile-id}
(Deprecated)
|
List Transport NodesReturns information about all transport nodes along with underlying host or edge details. A transport node is a host or edge that contains hostswitches. A hostswitch can have virtual machines connected to it. Because each transport node has hostswitches, transport nodes can also have virtual tunnel endpoints, which means that they can be part of the overlay. |
GET /api/v1/transport-nodes
|
Create a Transport NodeTransport nodes are hypervisor hosts and NSX Edges that will participate in an NSX-T overlay. For a hypervisor host, this means that it hosts VMs that will communicate over NSX-T logical switches. For NSX Edges, this means that it will have logical router uplinks and downlinks. This API creates a transport node for a host node (hypervisor) or edge node (router) in the transport network. When you run this command for a host, NSX Manager attempts to install the NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the installation to succeed, you must provide the host login credentials and the host thumbprint. To get the ESXi host thumbprint, SSH to the host and run the openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout command. To generate the host key thumbprint using the SHA-256 algorithm, follow the steps below. Log into the host, making sure that the connection is not vulnerable to a man-in-the-middle attack. Check whether a public key already exists; the host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'. If the key is not present, generate a new key by running the following command and following the instructions: ssh-keygen -t rsa Now generate a SHA256 hash of the key using the following command. Make sure to pass the appropriate file name if the public key is stored under a file name other than the default 'id_rsa.pub'. awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64 Additional documentation on creating a transport node can be found in the NSX-T Installation Guide. In order for the transport node to forward packets, the host_switch_spec property must be specified. Host switches (called bridges in OVS on KVM hypervisors) are the individual switches within the host virtual switch. Virtual machines are connected to the host switches. When creating a transport node, you need to specify if the host switches are already manually preconfigured on the node, or if NSX should create and manage the host switches. You specify this choice by the type of host switches you pass in the host_switch_spec property of the TransportNode request payload. For a KVM host, you can preconfigure the host switch, or you can have NSX Manager perform the configuration. For an ESXi host or NSX Edge node, NSX Manager always configures the host switch. To preconfigure the host switches on a KVM host, pass an array of PreconfiguredHostSwitchSpec objects that describes those host switches. In the current NSX-T release, only one preconfigured host switch can be specified. See the PreconfiguredHostSwitchSpec schema definition for documentation on the properties that must be provided. Preconfigured host switches are only supported on KVM hosts, not on ESXi hosts or NSX Edge nodes. To allow NSX to manage the host switch configuration on KVM hosts, ESXi hosts, or NSX Edge nodes, pass an array of StandardHostSwitchSpec objects in the host_switch_spec property, and NSX will automatically create host switches with the properties you provide. In the current NSX-T release, up to 16 host switches can be automatically managed. See the StandardHostSwitchSpec schema definition for documentation on the properties that must be provided. Note: Previous versions of NSX-T also used a property named transport_zone_endpoints at TransportNode level. This property is deprecated, which creates some combinations of new client payloads along with old client payloads.
Examples [1] & [2] show an old/existing client request and response populating the transport_zone_endpoints property at TransportNode level. Example [3] shows a TransportNode creation request/response populating the transport_zone_endpoints property at StandardHostSwitch level and other new properties. The request should provide either node_deployment_info or node_id. If the host node (hypervisor) or edge node (router) is already added in the system, then it can be converted to a transport node by providing node_id in the request. If the host node (hypervisor) or edge node (router) is not already present in the system, then the information should be provided under node_deployment_info. |
POST /api/v1/transport-nodes
|
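A heavily simplified sketch of the node_id route described above (converting an already-registered host into a transport node). Only the property names host_switch_spec, StandardHostSwitchSpec, and node_id come from the description; the nesting and remaining fields are assumptions, not the full schema, and host and credentials are placeholders.

```python
# Hypothetical sketch: create a transport node from an existing host node.
import requests

NSX = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "password")          # placeholder

tn = {
    "node_id": "existing-host-node-uuid",          # host already added to the system
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec", # NSX creates/manages the switches
        "host_switches": [{                        # assumed nesting and fields
            "host_switch_name": "nsxvswitch",
        }],
    },
}

resp = requests.post(f"{NSX}/api/v1/transport-nodes", json=tn,
                     auth=AUTH, verify=False)
resp.raise_for_status()
```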
Clear edge transport node stale entriesEdge transport node maintains its entry in many internal tables. In some cases a few of these entries might not get cleaned up during edge transport node deletion. This API cleans up any stale entries that may exist in the internal tables that store the Edge Transport Node data. |
POST /api/v1/transport-nodes?action=clean_stale_entries
|
Add or update deployment references of edge VM.Populates placement references for the edge node registered with the given identifier and manages lifecycle operations like edit and delete of the specified edge VM. This internal API may be used to convert a manually deployed edge VM into an NSX lifecycle-managed edge VM. The edge VM must be reachable from NSX Manager. NSX Manager fetches live configuration from the edge and vCenter Server and reports the values in the GET API for the following configuration. NSX Manager fetches the following settings from the edge - hostname, NTP servers, syslog servers, DNS servers, search domains and SSH. NSX Manager fetches the following configuration from vCenter Server - storage, networks, compute cluster, resource allocations and reservation for CPU and memory. NSX Manager saves fields that are not refreshed from external sources from the request payload itself. Fields include login credentials, resource pool, static IP address, network interfaces with static port attachments and the advanced configuration section. If these fields are configured on the edge and not specified in the request payload, then the converted edge will have gaps as compared to an NSX Manager lifecycle-managed edge deployed with this configuration. Any gaps in configuration will surface when subsequent lifecycle operations are performed. |
POST /api/v1/transport-nodes/{node-id}?action=addOrUpdatePlacementReferences
|
Redeploys a new node that replaces the specified edge node.Redeploys an edge node at NSX Manager that replaces the edge node with identifier <node-id>. If NSX Manager can access the specified edge node, then the node is put into maintenance mode and then the associated VM is deleted. This is a means to reset all configuration on the edge node. The communication channel between NSX Manager and edge is established after this operation. |
POST /api/v1/transport-nodes/{node-id}?action=redeploy
|
Get the module details of a transport node |
GET /api/v1/transport-nodes/{node-id}/modules
(Deprecated)
|
Get high-level summary of a transport node |
GET /api/v1/transport-nodes/{node-id}/pnic-bond-status
|
Read status of all transport nodes with tunnel connections to transport node |
GET /api/v1/transport-nodes/{node-id}/remote-transport-node-status
|
Read status of a transport node |
GET /api/v1/transport-nodes/{node-id}/status
|
List of tunnels |
GET /api/v1/transport-nodes/{node-id}/tunnels
|
Tunnel properties |
GET /api/v1/transport-nodes/{node-id}/tunnels/{tunnel-name}
|
Invoke DELETE request on target transport node |
DELETE /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Invoke GET request on target transport node |
GET /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Invoke POST request on target transport node |
POST /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Invoke PUT request on target transport node |
PUT /api/v1/transport-nodes/{target-node-id}/{target-uri}
|
Delete a Transport NodeDeletes the specified transport node. The query param force can be used to force delete host nodes. Force deletion of edge and public cloud gateway nodes is not supported. Force delete is not supported if the transport node is part of a cluster on which a transport node profile is applied. If transport node delete is called with the query param force not set or set to false and uninstall of NSX components on the host fails, the TransportNodeState object is retained. If transport node delete is called with the query param force set to true and uninstall of NSX components on the host fails, the TransportNodeState object is deleted. It also removes the specified node (host or edge) from the system. If the unprepare_host option is set to false, then the host will be deleted without uninstalling the NSX components from the host. |
DELETE /api/v1/transport-nodes/{transport-node-id}
|
Get a Transport NodeReturns information about a specified transport node. |
GET /api/v1/transport-nodes/{transport-node-id}
|
Apply cluster level Transport Node Profile on overridden hostA host can be overridden to have a different configuration than the Transport Node Profile (TNP) on the cluster. This action restores such an overridden host back to the cluster-level TNP. This API can also be used in another case: when a TNP is applied to a cluster and any validation fails (e.g. VMs running on the host), the existing transport node (TN) is not updated. In that case, after the issue is resolved manually (e.g. VMs powered off), you can call this API to update the TN as per the cluster-level TNP. |
POST /api/v1/transport-nodes/{transport-node-id}?action=restore_cluster_config
(Deprecated)
|
Enable flow cache for an edge transport nodeEnable flow cache for edge transport node. Caution: This involves restart of the edge dataplane and hence may lead to network disruption. |
POST /api/v1/transport-nodes/{transport-node-id}?action=enable_flow_cache
|
Refresh the node configuration for the Edge node.The API is applicable to Edge transport nodes. If you update the edge configuration and find a discrepancy between the Edge configuration at NSX Manager and the realized configuration, use this API to refresh the configuration at NSX Manager. It refreshes the Edge configuration from sources external to NSX Manager, like vSphere Server or the Edge node CLI. After this action, the Edge configuration at NSX Manager is updated and the API GET api/v1/transport-nodes will show refreshed data. From the 3.2 release onwards, the refresh API updates the MP intent by default. |
POST /api/v1/transport-nodes/{transport-node-id}?action=refresh_node_configuration&resource_type=EdgeNode
|
Restart the inventory sync for the node if it is paused currently.Restart the inventory sync for the node if it is currently internally paused. After this action the next inventory sync coming from the node is processed. |
POST /api/v1/transport-nodes/{transport-node-id}?action=restart_inventory_sync
|
Disable flow cache for an edge transport nodeDisable flow cache for edge transport node. Caution: This involves restart of the edge dataplane and hence may lead to network disruption. |
POST /api/v1/transport-nodes/{transport-node-id}?action=disable_flow_cache
|
Update a Transport NodeModifies the transport node information. The host_switch_name field must match the host_switch_name value specified in the transport zone (API: transport-zones). You must create the associated uplink profile (API: host-switch-profiles) before you can specify an uplink_name here. If the host is an ESX and has only one physical NIC being used by a vSphere standard switch, TransportNodeUpdateParameters should be used to migrate the management interface and the physical NIC into a logical switch that is in a transport zone this transport node will join or has already joined. If the migration is already done, TransportNodeUpdateParameters can also be used to migrate the management interface and the physical NIC back to a vSphere standard switch. In other cases, the TransportNodeUpdateParameters should NOT be used. When updating a transport node, follow the pattern of fetching the existing transport node and modifying only the required properties, keeping other properties as-is. It also modifies attributes of the node (host or edge). Note: Previous versions of NSX-T also used a property named transport_zone_endpoints at TransportNode level. This property is deprecated, which creates some combinations of new client payloads along with old client payloads. Example [1] shows an old/existing client request and response populating the transport_zone_endpoints property at TransportNode level. Example [2] shows updating the TransportNode from example [1] by adding a new StandardHostSwitch and populating transport_zone_endpoints at StandardHostSwitch level. TransportNode level transport_zone_endpoints will ONLY have TransportZoneEndpoints that were originally specified here during the create/update operation and does not include TransportZoneEndpoints that were directly specified at StandardHostSwitch level. If the API response is 200 OK, the user will have to wait for the config to get realized, and realization of the intent can be tracked using /api/v1/transport-nodes/ |
PUT /api/v1/transport-nodes/{transport-node-id}
|
Return the list of capabilities of transport nodeReturns information about capabilities of transport host node. Edge nodes do not have capabilities. |
GET /api/v1/transport-nodes/{transport-node-id}/capabilities
|
List the specified transport node's network interfacesReturns the number of interfaces on the node and detailed information about each interface. Interface information includes MTU, broadcast and host IP addresses, link and admin status, MAC address, network mask, and the IP configuration method (static or DHCP). |
GET /api/v1/transport-nodes/{transport-node-id}/network/interfaces
|
Read the transport node's network interfaceReturns detailed information about the specified interface. Interface information includes MTU, broadcast and host IP addresses, link and admin status, MAC address, network mask, and the IP configuration method (static or DHCP). |
GET /api/v1/transport-nodes/{transport-node-id}/network/interfaces/{interface-id}
|
Read counters for transport node interfaces.This API returns the counters of the specified interface. The counters reset on reboot or redeploy of the appliance or restart of the data plane. NSX Manager polls the transport node every minute (by default) to update the data returned by this API. If you need near-realtime values, add the query parameter "?source=realtime" to the API and NSX Manager will collect the statistics from the transport node and return the updated counters. |
GET /api/v1/transport-nodes/{transport-node-id}/network/interfaces/{interface-id}/stats
|
Get a Transport Node's StateReturns information about the current state of the transport node configuration and information about the associated hostswitch. |
GET /api/v1/transport-nodes/{transport-node-id}/state
|
Resync a Transport NodeResyncs the TransportNode configuration on a host. It is similar to updating the TransportNode with the existing configuration, but forces a sync of these configurations to the host (no backend optimizations). |
POST /api/v1/transport-nodes/{transportnode-id}?action=resync_host_config
|
Update transport node maintenance modePut transport node into maintenance mode or exit from maintenance mode. |
POST /api/v1/transport-nodes/{transportnode-id}
|
List transport nodes by realized stateReturns a list of transport node states that have the realized state provided as a query parameter. If this API is called multiple times in parallel, it will fail with an error indicating that another request is already in progress. In that case, try the API on another NSX Manager instance (if one exists) or try again after some time. |
GET /api/v1/transport-nodes/state
|
Get high-level summary of all transport nodes. The service layer does not support source = realtime or cached. |
GET /api/v1/transport-nodes/status
|
List Transport ZonesReturns information about configured transport zones. NSX requires at least one transport zone. NSX uses transport zones to provide connectivity based on the topology of the underlying network, trust zones, or organizational separations. For example, you might have hypervisors that use one network for management traffic and a different network for VM traffic. This architecture would require two transport zones. The combination of transport zones plus transport connectors enables NSX to form tunnels between hypervisors. Transport zones define which interfaces on the hypervisors can communicate with which other interfaces on other hypervisors to establish overlay tunnels or provide connectivity to a VLAN. A logical switch can be in one (and only one) transport zone. This means that all of a switch's interfaces must be in the same transport zone. However, each hypervisor virtual switch (OVS or VDS) has multiple interfaces (connectors), and each connector can be attached to a different logical switch. For example, on a single hypervisor with two connectors, connector A can be attached to logical switch 1 in transport zone A, while connector B is attached to logical switch 2 in transport zone B. In this way, a single hypervisor can participate in multiple transport zones. The API for creating a transport zone requires that a single host switch be specified for each transport zone, and multiple transport zones can share the same host switch. |
GET /api/v1/transport-zones
(Deprecated)
|
Create a Transport ZoneCreates a new transport zone. The required parameter is transport_type (OVERLAY or VLAN). The optional parameters are description and display_name. |
POST /api/v1/transport-zones
(Deprecated)
|
Delete a Transport ZoneDeletes an existing transport zone. |
DELETE /api/v1/transport-zones/{zone-id}
(Deprecated)
|
Get a Transport ZoneReturns information about a single transport zone. |
GET /api/v1/transport-zones/{zone-id}
(Deprecated)
|
Update a Transport ZoneUpdates an existing transport zone. Modifiable parameters are is_default, description, and display_name. |
PUT /api/v1/transport-zones/{zone-id}
(Deprecated)
|
Get high-level summary of a transport zone |
GET /api/v1/transport-zones/{zone-id}/status
|
Get a Transport Zone's Current Runtime Status InformationReturns information about a specified transport zone, including the number of logical switches in the transport zone, the number of logical switch ports assigned to the transport zone, and the number of transport nodes in the transport zone. |
GET /api/v1/transport-zones/{zone-id}/summary
(Deprecated)
|
Read status of transport nodes in a transport zone |
GET /api/v1/transport-zones/{zone-id}/transport-node-status
|
Creates a status report of transport nodes in a transport zoneYou must provide the request header "Accept:application/octet-stream" when calling this API. |
GET /api/v1/transport-zones/{zone-id}/transport-node-status-report
|
Creates a status json report of transport nodes in a transport zone |
GET /api/v1/transport-zones/{zone-id}/transport-node-status-report-json
|
Get high-level summary of a transport zone. The service layer does not support source = realtime or cached. |
GET /api/v1/transport-zones/status
|
Read status of all the transport nodes |
GET /api/v1/transport-zones/transport-node-status
|
Creates a status report of transport nodes of all the transport zonesYou must provide the request header "Accept:application/octet-stream" when calling this API. |
GET /api/v1/transport-zones/transport-node-status-report
|
Creates a status json report of transport nodes of all the transport zones |
GET /api/v1/transport-zones/transport-node-status-report-json
|
List transport zone profilesReturns information about the configured transport zone profiles. Transport zone profiles define networking policies for transport zones and transport zone endpoints. |
GET /api/v1/transportzone-profiles
(Deprecated)
|
Create a transport zone ProfileCreates a transport zone profile. The resource_type is required. |
POST /api/v1/transportzone-profiles
(Deprecated)
|
Delete a transport zone ProfileDeletes a specified transport zone profile. |
DELETE /api/v1/transportzone-profiles/{transportzone-profile-id}
(Deprecated)
|
Get transport zone profile by identifierReturns information about a specified transport zone profile. |
GET /api/v1/transportzone-profiles/{transportzone-profile-id}
(Deprecated)
|
Update a transport zone profileModifies a specified transport zone profile. The body of the PUT request must include the resource_type. |
PUT /api/v1/transportzone-profiles/{transportzone-profile-id}
(Deprecated)
|
Reset IPSec VPN session statisticsReset IPSec VPN session statistics |
POST /api/v1/vpn/ipsec/sessions/{sessionid}/statistics?action=reset
|
Delete deployment information.This is an API called by VCF to delete deployment information. |
DELETE /api/v1/watermark
|
Get deployment information.This is an API called by VCF to get deployment information. |
GET /api/v1/watermark
|
Create or update deployment information.This is an API called by VCF to store or update deployment information. |
POST /api/v1/watermark
|
Create or update deployment information.This is an API called by VCF to update stored deployment information. |
PUT /api/v1/watermark
|