NSX-V vs NSX-T: Discover the Key Differences

Modern data centers use physical servers with hypervisors to run virtual machines; virtualization brought cost-effectiveness and enhanced scalability. With virtualized network solutions such as VMware's NSX software, the concept of networking acquired a new meaning.

The software-defined networking model, or SDN, provides all the necessary features to create complex network configurations. SDN provides high security and enables seamless scaling without extensive physical network changes due to its agile and flexible nature.

This article examines what VMware NSX is and the differences between VMware NSX-V and VMware NSX-T.


VMware NSX

In the virtualization market space, VMware is one of the biggest names. The company offers an array of products for virtual workstations, network virtualization, and security platforms.

VMware NSX is an advanced SDN solution with two variants:

  • NSX-V
  • NSX-T

What is VMware NSX?

NSX is a software-defined networking solution VMware created to tackle the rapid growth of data center networking needs. The main function is to provide virtualized network creation and management via NSX Manager, a component of NSX Data Center. Additionally, VMware NSX provides crucial network security features to ensure the virtual environment is safe.

With NSX, no changes to the underlying hardware are necessary. Network provisioning and connection between virtual networks are abstracted from the hardware, similar to how VMs work. NSX virtualizes switches, routers, firewalls, and other network services from Layer 2 to Layer 7 of the OSI model.


This process saves time, effort, and costs, enabling large companies to centralize and automate network management and control. Depending on the product you choose, VMware NSX is suitable for all infrastructure types:

  • Multi-cloud environments
  • On-premises data centers
  • Containerized workloads


Read on below to learn the differences between NSX-V and NSX-T.

What is NSX-V?

NSX-V architecture (V as in vSphere) features deployment reconfiguration, rapid provisioning, and destruction of on-demand virtual networks. This product depends on VMware vSphere and connection with vCenter. Once NSX-V pairs with vCenter, the integration with vSphere is seamless.

The downside is that you cannot have multiple vCenters with NSX-V. The question arises: what if I have more than one vCenter? In that case, you must have multiple NSX managers, which can be challenging in multi-cloud environments. The maintenance can become cumbersome and demanding. As companies transition to the cloud for handling workloads, the VMware NSX-V solution proves unsuitable for many clients. Now more of a legacy VMware product, NSX-V is being replaced by NSX-T.

What is NSX-T?

VMware NSX-Transformers is the successor to NSX-V and offers options to build a highly agile SDN infrastructure. The product brings network virtualization to bare-metal and containerized workloads, multi-cloud, and multi-hypervisor environments. NSX-T supports cloud-native applications and provides a network virtualization stack for OpenStack, Kubernetes, KVM, Docker, and more.

Many consider NSX-T as an upgrade of NSX-V. It is true that the main idea behind both products is the same. However, VMware built NSX-T from the ground up, and what lies under the hood differs completely from NSX-V.

The main benefit is that NSX-T can be deployed in heterogeneous environments with many different components. NSX-T is not confined to a vCenter deployment. You can have multiple vCenter servers and use one NSX Manager as a single pane of glass for controlling your virtual network. Or you do not have to deploy a vCenter at all; instead, you can select ESXi as the transport node operating system directly in the NSX-T GUI, for example.


NSX-V Vs. NSX-T Interface

There is no dedicated NSX-V interface labeled by that name. With NSX-V installation comes a plugin that integrates into vCenter Server. Therefore, to access the GUI options for NSX-V, you log in to your vSphere instance and select the Networking and Security option.


If this option does not appear in your GUI, the account you are using lacks the necessary permissions. Only the user who connected NSX Manager to vCenter Server receives Enterprise Administrator access; all other accounts have service rights only and must be assigned the proper permissions.

NSX-T has its own interface, decoupled from vCenter, called NSX-T Manager. This means you do not have to log in to vCenter to manage your NSX-T architecture.

When you log in to the NSX-T manager, the home page shows the networking, security, system, and inventory overview, alarms, etc.


From the NSX-T Manager web client, you can view your vCenters or add new ones to pull inventory. This is where NSX-T outshines NSX-V: it can manage network and security automation across multiple vCenters.

NSX Main Components

The primary components of VMware NSX are:

  • NSX Manager
  • NSX Controller
  • NSX Edge

NSX Manager

NSX Manager is the primary component used to configure, manage, and monitor all NSX components through a GUI. Its REST APIs provide consistent configuration and object manipulation.

NSX-V APIs and NSX-T APIs are not the same.

NSX Manager lets you deploy controllers, edges, and distributed routers, and generate certificates. In NSX-V, NSX Manager works with one vCenter Server and runs as a single virtual machine. In NSX-T, the NSX-T Manager can connect to multiple vCenters and run as a KVM or ESXi VM, for example.
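As an illustration of the API-driven approach, the sketch below builds a request against the NSX-T Policy API (reachable over HTTPS with basic authentication). The manager address and credentials are placeholders for the example, not values from this article:

```python
import base64

# Hypothetical manager address and credentials -- replace with your own.
NSX_MANAGER = "nsx-manager.example.com"
USER, PASSWORD = "admin", "changeme"

def policy_url(manager: str, path: str) -> str:
    """Build an NSX-T Policy API URL, e.g. for listing segments."""
    return f"https://{manager}/policy/api/v1/{path.lstrip('/')}"

def basic_auth_header(user: str, password: str) -> dict:
    """HTTP Basic auth header accepted by NSX Manager."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# The request that would list all segments:
url = policy_url(NSX_MANAGER, "infra/segments")
headers = basic_auth_header(USER, PASSWORD)
print(url)  # https://nsx-manager.example.com/policy/api/v1/infra/segments
```

The same pattern works for any Policy API path; only the path fragment changes, which is what makes automation against NSX Manager consistent.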

NSX Edge

VMware NSX Edge Services Gateway (ESG) is the NSX component that provides essential gateway services such as:

  • DHCP
  • NAT
  • VPN
  • Load balancing
  • Perimeter firewall
  • Dynamic routing

NSX Edge enables the connection between isolated and shared networks and direct connection between VMs. Besides the east-west routing, this service allows tenants to reach public networks by providing the north-south connection. Using NSX Edge lets you create virtual boundaries for your workloads, components, and tenants.

NSX Controllers

NSX Controller is a distributed state management system and the central control spot for logical switching and routing. The controllers contain all information about hosts, VMs, distributed logical routers, and VXLANs.

This virtual appliance is crucial for deploying and monitoring scalable, highly available virtual networks. No matter the size of the software-defined data center, VMware requires exactly three controllers in NSX-T to achieve the necessary redundancy.

NSX Controller uses a Secure Sockets Layer (SSL) connection to connect to ESXi hosts and an SSL-secured API to interact with NSX Manager.


Features of NSX

Both NSX-V and NSX-T have the same goal in mind: to provide features to deploy virtual networks on top of the physical network.

Some of the NSX features include:

  • Distributed logical routing and switching (plus the other features listed in the NSX Edge section above)
  • Detailed monitoring and statistics
  • QoS
  • API-driven automation
  • Software-based overlay
  • Enhanced user interface

The available features depend on the NSX license you purchase.

However, the two tools are built as different products and have many differences. NSX-V cannot work with multiple vCenters and depends strongly on vSphere.

On the other hand, NSX-T is a cross-platform solution that works with multiple vCenter instances and different environments. The latest NSX-T features also include NSX-V to NSX-T Migration and distributed IDS/IPS.

Explore the comparison table below and the sections for feature details to understand the differences between NSX-V and NSX-T better.

VMware NSX-V vs. NSX-T Comparison

Basic Functions:

  • NSX-V offers rich features such as deployment reconfiguration, rapid provisioning, and destruction of any on-demand virtual network, along with simplified networking and security operations.
  • NSX-T extends these capabilities to multi-cloud, multi-hypervisor, bare-metal, and containerized environments and supports multiple vCenter instances.

The summary above compares NSX-V and NSX-T at a glance. Read on to learn more details on the differences between the two solutions.


Logical Routing in NSX

Both VMware NSX solutions provide dynamic routing capabilities that surpass those of physical routers. However, the routing differences between NSX-V and NSX-T are numerous.

Fundamentally, NSX-T is designed with cloud multi-tenancy in mind. Because provider and tenant router configurations require isolation, NSX-T introduced multi-tier routing support:

  1. Tier 0 logical routing for the provider admin user.
  2. Tier 1 logical routing control for the tenant admin user.
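The two tiers can be wired together through the NSX-T Policy API. The sketch below shows a request body linking a tenant Tier-1 gateway to a provider Tier-0 gateway; the IDs are invented for the example:

```python
import json

# Hypothetical IDs: a provider-level Tier-0 and a tenant Tier-1.
TIER0_ID = "provider-t0"
TIER1_ID = "tenant-a-t1"

# Body for PATCH /policy/api/v1/infra/tier-1s/{TIER1_ID},
# linking the tenant router northbound to the provider router.
tier1_body = {
    "display_name": TIER1_ID,
    "tier0_path": f"/infra/tier-0s/{TIER0_ID}",           # northbound connection
    "route_advertisement_types": ["TIER1_CONNECTED"],     # advertise connected segments
}

print(json.dumps(tier1_body, indent=2))
```

Because the tenant admin only touches the Tier-1 object, the provider's Tier-0 configuration stays isolated, which is the point of the two-tier design.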

NSX-V ESG (Edge Services Gateway) provides essential DHCP, NAT, VPN services, and DLR (distributed logical routers) to isolate virtual networks. This way, NSX enables communication between VMs with fewer network hops than with traditional routing. However, the lack of support for tiered routing and multi-tenancy leaves NSX-V behind NSX-T.

Logical Switching and Bridging in NSX

NSX-V and NSX-T use overlay networks to mimic conventional VLANs and provide easier network manipulation through logical switching and bridging. The functionalities resemble those of a physical network.

With NSX-V, logical switches are coupled with VXLANs to encapsulate and direct VM traffic to the physical network. By allowing Layer 2 bridging between a physical VLAN and a logical switch, NSX-V expands the pool of features. This includes linking physical with virtual components when complete virtualization is not possible. NSX-V supports multiple Layer 2 bridges.

Similar to its older counterpart, NSX-T provides overlay functionality using logical switching but with a more advanced encapsulation protocol. L2 bridging in NSX-T requires a dedicated bridge node, whereas in NSX-V, bridging happens at the hypervisor kernel level where a controller VM resides.

Overlay Encapsulation in NSX: VXLAN vs. Geneve

As with other features, NSX-V relies on the more traditional VXLAN encapsulation compared to NSX-T. Virtual Extensible LAN allows the creation of around 16 million virtual network segments, surpassing the VLAN limitation of 4094 possible networks. This capacity lets large organizations segment virtual networks far more freely.

Geneve stands for Generic Network Virtualization Encapsulation and combines the best of other encapsulation protocols, including VXLAN and STT. NSX-T relies on Geneve to deliver virtual network identifier information with high throughput and lower resource usage. Geneve is a flexible protocol that can carry additional metadata as networks evolve.
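The segment counts quoted above fall directly out of the identifier widths: a VLAN ID is 12 bits, while a VXLAN or Geneve VNI is 24 bits:

```python
# VLAN IDs are 12 bits; VXLAN/Geneve VNIs are 24 bits.
vlan_bits, vni_bits = 12, 24

vlan_ids = 2 ** vlan_bits      # 4096 raw values
usable_vlans = vlan_ids - 2    # IDs 0 and 4095 are reserved -> 4094
vni_ids = 2 ** vni_bits        # 16,777,216 possible segments

print(usable_vlans)  # 4094
print(vni_ids)       # 16777216
```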

Security and Micro-Segmentation

Both NSX-V and NSX-T allow organizations to separate data centers into logical network security segments and achieve workload-level application protection. This way, you can granularly define network security policies. You can segment a VDC around vCenter objects and hosts, VM names or features, or by using IP addresses, ports, etc.

NSX solutions use a hypervisor-level distributed firewall to handle all network parameters and security policies. You can use AD users and groups to define rules where AD Domain Controller (ADDC) is deployed.

However, NSX-T offers more security features and a more granular security rule application than NSX-V. Nevertheless, all NSX security functions are built for continuous automation to avoid the confines of manual configuration and maintenance.
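As a sketch of how such a micro-segmentation policy looks when driven through automation, the rule below follows the shape of an NSX-T Policy API distributed firewall rule; the group names and paths are invented for the example:

```python
import json

# Hypothetical rule: allow traffic from a "web-tier" group to an "app-tier" group.
# Group paths are assumptions for the example, not from the article.
dfw_rule = {
    "display_name": "web-to-app",
    "action": "ALLOW",
    "source_groups": ["/infra/domains/default/groups/web-tier"],
    "destination_groups": ["/infra/domains/default/groups/app-tier"],
    "services": ["ANY"],   # or the path of a specific service object
    "scope": ["ANY"],      # enforced at the vNIC by the distributed firewall
}

print(json.dumps(dfw_rule, indent=2))
```

Defining rules against groups rather than IP addresses is what makes the policy follow workloads automatically as they move.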

Deployment Options

The deployment process is quite similar for both products, yet there are many differences between the NSX-V and NSX-T features. Here are some critical discrepancies in deployment:

  • With NSX-V, there is tight integration with VMware vSphere and vCenter. An NSX Manager instance is deployed as an ESXi VM only and has to be registered with vCenter. On the other hand, you can deploy NSX-T on either a KVM or ESXi VM while vCenter is not required. Additionally, multiple NSX Managers are supported.
  • In contrast to NSX-T, where the Manager and Controller are included in the same virtual appliance, NSX-V requires you to deploy a separate NSX Controller alongside NSX Manager.
  • NSX-T lets you deploy a cluster of three NSX controllers for increased redundancy.
  • It is not possible to use standard virtual switches with NSX-V, as it requires you to deploy a vDS. NSX-T uses a new virtual switch technology, N-VDS, or Open vSwitch (OVS) for KVM hosts.

NSX Licensing

The upside of NSX-V and NSX-T licensing is that VMware does not split the two products into separate licenses. If you already have an NSX-V license, you can start using NSX-T whenever you are ready.

The solution is now called VMware NSX Data Center and has the following editions:

  • Standard
  • Professional
  • Advanced
  • Enterprise Plus

With the increase in remote work, VMware also introduced the Remote Office Branch Office (ROBO) edition.

Choosing Between NSX-V and NSX-T

The major differences in NSX-V vs. NSX-T systems are evident in the table and the sections above. The first is closely associated with the VMware vSphere ecosystem. The other is unrestricted, not tied to a specific platform or hypervisor.


To determine which NSX option is better for you, take into consideration the use case for each product.

VMware NSX-V and NSX-T have many distinct features, a totally different code base, and cater to different use cases.

Choosing NSX-V

This product is still a valid choice in the following NSX-V use cases:

  • For on-premises workloads without multiple hypervisors.
  • When your data center uses only VMware vSphere.
  • If your IT environment does not use more than one vCenter instance.
  • If you do not need a highly redundant on-prem infrastructure with multiple NSX Managers.

Choosing NSX-T

With many updates and enhancements, VMware NSX-T has overtaken NSX-V in almost all cases. NSX-T use cases include the following:

Security:

  • Critical application lockdown
  • Protection of individual workloads
  • Logical DMZ creation
  • Micro-segmentation for granular security policies

Automation:

  • Fast deployment of full-stack applications
  • Integration with other automation tools
  • Automatic security policy and access rule application when migrating VMs to other subnets
  • Consistent and error-free network configuration and management

Multi-Cloud Virtual Networking:

  • Support for multi-cloud environments
  • Cross-site network virtualization for rapidly growing businesses
  • Single to multi-data-center extension

Cloud-Native App Support:

  • Advanced networking and security features for microservices and containerized applications
  • Per-container security policies
  • Native container-to-container Layer 3 networking

Conclusion: VMware NSX Provides a Strong Network Virtualization Platform

NSX-T and NSX-V both solve many virtualization issues, offer full feature sets, and provide an agile and secure environment. NSX-V is the original network and security virtualization platform and was king of the hill for many years. As businesses grow, there are fewer companies with single on-premises data centers and more cloud-based IT environments.

This is where NSX-T steps in. Its constant expansion of features and multi-site and cloud support is making NSX-V obsolete. Now, VMware labels their NSX solution as VMware NSX Data Center. With the necessary tools for moving and handling your data, regardless of the underlying physical network, NSX helps you adjust to the constant changes in applications.

The choice you make depends on which NSX features meet your business needs. If you are in the market for NSX, contact us for more information and NSX-V to NSX-T migration. Keep reading our blog to learn more about different tools and to find the best solutions for your networking requirements.

NSX-T Notes and References

These notes reference the following resources:

  1. NSX-T Data Center Migration Coordinator Guide
  2. NSX-T Data Center Reference Design Guide
  3. NSX-T Data Center Blog
  4. NSX-T Data Center Training and Demo videos
  5. NSX-T Data Center Load Balancing Encyclopedia
  6. NSX-T Data Center Multi-Location Design Guide

Deploy NSX-T

Steps to implement NSX-T Data Center in vSphere:

  1. Deploy an NSX Manager node from an OVF template
  2. Access the NSX UI
  3. Register vCenter Server with NSX Manager
  4. Deploy additional NSX Manager nodes to form an NSX management cluster
  5. Preconfigure transport nodes, including transport zones, IP pools, and uplink profiles
  6. Prepare hypervisor host transport nodes
  7. Deploy the NSX Edge nodes and create an edge cluster

A transport zone name cannot contain spaces; use "-" instead. The name cannot be changed once configured.
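A small helper can enforce this naming rule before a zone is created; the function names below are just for illustration:

```python
import re

def valid_tz_name(name: str) -> bool:
    """A transport zone name must be non-empty and contain no spaces."""
    return bool(name) and " " not in name

def normalize_tz_name(name: str) -> str:
    """Turn a free-form label into a hyphenated transport zone name."""
    return re.sub(r"\s+", "-", name.strip())

print(normalize_tz_name("overlay transport zone 1"))  # overlay-transport-zone-1
print(valid_tz_name("overlay-tz"))                    # True
print(valid_tz_name("bad name"))                      # False
```

Validating up front matters here precisely because the name cannot be changed after the zone is configured.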

NSX-T 3.1 Administration Guide

NSX-T 3.1 Installation Guide

NSX-T 3.1 Upgrade Guide

NSX-T 3.0 Operational Guide

Key Concepts

NSX-T Data Center works by implementing three separate but integrated planes: management, control, and data. These planes are implemented as a set of processes, modules, and agents residing on two types of nodes.

NSX Manager supports a cluster of three nodes, which merges the policy manager, management, and central control services on a cluster of nodes. NSX Manager clustering provides high availability of the user interface and API. The convergence of management and control plane nodes reduces the number of virtual appliances that the NSX-T Data Center administrator must deploy and manage.

Temporary NSX Manager Nodes Usage


A compute manager is an application that manages resources such as hosts and VMs. One example is vCenter Server.

Management Plane

Provides a single API entry point to the system, persists user configuration, handles user queries, and performs operational tasks on all management, control, and data plane nodes in the system. The management plane is also responsible for querying, modifying, and persisting user configuration.

Control Plane

Computes runtime state based on configuration from the management plane. Control plane disseminates topology information reported by the data plane elements, and pushes stateless configuration to forwarding engines.

Data Plane

Performs stateless forwarding or transformation of packets based on tables populated by the control plane. Data plane reports topology information to the control plane and maintains packet level statistics.

NSX Managed Virtual Distributed Switch or KVM Open vSwitch

The NSX managed virtual distributed switch (N-VDS, previously known as hostswitch) or OVS is used for shared NSX Edge and compute clusters. N-VDS is required for overlay traffic configuration.
An N-VDS has two modes: standard datapath and enhanced datapath.

NSX Manager

Node that hosts the API services, the management plane, and the agent services. NSX Manager is an appliance included in the NSX-T Data Center installation package. You can deploy the appliance in the role of NSX Manager or nsx-cloud-service-manager.

Open vSwitch (OVS)

Open source software switch that acts as a virtual switch within XenServer, Xen, KVM, and other Linux-based hypervisors.

Overlay Logical Network

Logical network implemented using Layer 2-in-Layer 3 tunneling such that the topology seen by VMs is decoupled from that of the physical network.

Tier-0 Logical Router

A Tier-0 Logical Router provides north-south connectivity and connects to the physical routers. It can be configured as an active-active or active-standby cluster. The Tier-0 gateway runs BGP and peers with physical routers. In active-standby mode the gateway can also provide stateful services.

Tier-1 Logical Router

A Tier-1 logical router connects to one Tier-0 logical router for northbound connectivity to the subnetworks attached to it. It connects to one or more overlay networks for southbound connectivity to its subnetworks. A Tier-1 logical router can be configured as an active-standby cluster.

Transport Zone

Collection of transport nodes that defines the maximum span for logical switches. A transport zone represents a set of similarly provisioned hypervisors and the logical switches that connect VMs on those hypervisors. These hypervisors have been registered with the NSX-T Data Center management plane and have NSX-T Data Center modules installed. For a hypervisor host or NSX Edge to be part of the NSX-T Data Center overlay, it must be added to the NSX-T Data Center transport zone.

Transport Node

A fabric node is prepared as a transport node so that it becomes capable of participating in an NSX-T Data Center overlay or NSX-T Data Center VLAN networking. For a KVM host, you can preconfigure the N-VDS or you can have NSX Manager perform the configuration. For an ESXi host, NSX Manager always configures the N-VDS.

Uplink Profile

Defines policies for the links from hypervisor hosts to NSX-T Data Center logical switches or from NSX Edge nodes to top-of-rack switches. The settings defined by uplink profiles might include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting. The transport VLAN set in the uplink profile tags overlay traffic only and the VLAN ID is used by the TEP endpoint.

Virtual Tunnel Endpoint

Each hypervisor has a Virtual Tunnel Endpoint (VTEP) responsible for encapsulating the VM traffic inside a Geneve tunnel header and routing the packet to a destination VTEP for further processing. Traffic can be routed to another VTEP on a different host or to the NSX Edge gateway to access the physical network.

Install NSX Manager and Available Appliances

You can use the vSphere Client to deploy NSX Manager virtual appliances. The same OVF file can be used to deploy three different types of appliances: NSX Manager, NSX Cloud Service Manager for NSX Cloud, and Global Manager for NSX Federation.

Cloud Service Manager (CSM) is a virtual appliance that uses NSX-T Data Center components and integrates them with your public cloud.

Important Logs

NSX-T CLI References

Reference Sites

Good NSX-T installation guide

Tier-0 Gateway and BGP

Configure logical routing

Setting up the Postman API client

Deploy NSX-T Using PowerShell


NSX-T Concepts


A segment, also known as a logical switch, provides switching functionality in an NSX-T Data Center virtual environment. Segments are similar to VLANs. Each segment has a virtual network identifier (VNI), similar to a VLAN ID, but VNIs scale well beyond the limits of VLAN IDs. A segment is a representation of a Layer 2 broadcast domain across transport nodes and segregates networks from each other. The VMs connected to a segment can communicate with each other through tunnels between hosts. A segment is created in either an overlay or a VLAN-based transport zone.

Segment profiles include Layer 2 network configuration details for logical switches and logical ports, and support QoS, port mirroring, IP Discovery, SpoofGuard, segment security, MAC management, and Network I/O Control.

The NSX VNI range is 5000 through 16,777,216, and the underlay must use an MTU of at least 1600 to account for the encapsulation header.
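The MTU requirement is simple arithmetic: the Geneve encapsulation headers must fit on top of a standard 1500-byte guest frame. A quick sanity check, with the overhead figure as a rough assumption:

```python
# Geneve adds outer Ethernet/IP/UDP/Geneve headers on top of the original frame.
GUEST_MTU = 1500
ENCAP_OVERHEAD = 100   # rough allowance for the outer headers (assumption)

required_underlay_mtu = GUEST_MTU + ENCAP_OVERHEAD
print(required_underlay_mtu)  # 1600

# The NSX VNI range from the notes above.
VNI_MIN, VNI_MAX = 5000, 16_777_216

def vni_in_range(vni: int) -> bool:
    """Check that a VNI falls inside the NSX-assignable range."""
    return VNI_MIN <= vni <= VNI_MAX

print(vni_in_range(5001))  # True
print(vni_in_range(4094))  # False -- that is a VLAN ID, not an NSX VNI
```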

Segment Command Lines

Tier-0 and Tier-1 Gateway

Distributed Router (DR) and Service Router (SR)

A DR is always created when creating a gateway. The DR component is distributed among all hypervisors and provides basic packet forwarding. The SR component is located only on the NSX Edge nodes and provides services. An SR is automatically created on the edge node when you configure the gateway with an edge cluster.


Routerlink

Routerlink is a type of interface that connects Tier-0 and Tier-1 gateways. The interface is created automatically when the Tier-0 and Tier-1 gateways are connected, and it uses an address assigned from a reserved internal IPv4 subnet (100.64.0.0/16 by default).

Intratier Transit Link

The intratier transit link connection is automatically created when a service router (SR) is created. It is an internal link between the SR and DR on a gateway and has an IP address in a reserved link-local subnet (169.254.0.0/28 by default).

NSX Edge

When you first deploy an NSX Edge, you can think of it as an empty container. The NSX Edge does not do anything until you create logical routers. The NSX Edge provides the compute backing for tier-0 and tier-1 logical routers. Each logical router contains a services router (SR) and a distributed router (DR). When we say that a router is distributed, we mean that it is replicated on all transport nodes that belong to the same transport zone.

An NSX Edge can belong to one overlay transport zone and multiple VLAN transport zones. An NSX Edge belongs to at least one VLAN transport zone to provide the uplink access.

Virtual-Appliance or VM NSX Edge Networking

When you install NSX Edge as a virtual appliance or VM, internal interfaces called fp-ethX are created, where X is 0, 1, 2, or 3. These interfaces are allocated for uplinks to top-of-rack (ToR) switches and for NSX-T Data Center overlay tunneling.

When you create the NSX Edge transport node, you can select fp-ethX interfaces to associate with the uplinks and the overlay tunnel. You can decide how to use the fp-ethX interfaces.

On the vSphere Distributed Switch or vSphere Standard Switch, you must allocate at least two vmnics to the NSX Edge: one for NSX Edge management and one for uplinks and tunnels.

Edge Cluster

An NSX edge cluster ensures that at least one NSX edge node is always available. The following guidelines apply:

  1. A maximum of 10 edge nodes per cluster.
  2. An edge transport node can be added to only one edge cluster.
  3. A maximum of 160 clusters can be configured. One cluster can provide 8-way ECMP paths northbound while another provides centralized services.

NSX edge VM sizing options

An NSX edge node can be deployed as a VM on an ESXi host or as a bare-metal node.

Edge Node VM Interfaces

The first interface must be defined for management access (eth0) by using one vNIC.

The other interfaces must be assigned to the datapath process.

Edge Node Installation

Install an edge node on bare metal using the ISO file, or deploy it from NSX Manager or from vCenter. By default, the root login password is vmware, and the admin login password is default.

Configure the NSX Edge node with a DNS Server

Join NSX Edge Node to Management Plane

Installing the NSX edge node by any method other than the NSX UI does not automatically join the NSX edge node to the management plane.

Notes from the field

Edge Cluster

Create an edge cluster for the following reasons:

  1. Having a multinode cluster of NSX edge nodes ensures that at least one NSX edge node is always available.
  2. We must associate an edge node with an NSX edge node cluster if we want to create stateful services, such as NAT, load balancer, etc.

Edge Command Lines

NSX-T Troubleshooting

The NSX-T Data Center kernel modules are packaged in VIB files and downloaded to transport nodes. The kernel modules provide services such as distributed routing, distributed firewall, and so on.

The functions of the VIBs are defined as follows:

Verify KVM Transport Node by CLI

NSX-T Command Line

ESXi Host Command Lines

KVM Host Transport Node Command Lines

You can prepare ESXi hosts, KVM hosts, and physical servers as NSX-T Data Center transport nodes. After adding an ESXi host to the NSX-T Data Center fabric, the following VIBs get installed on the host.

Prepare Standalone Hosts as Transport Nodes

A transport node is a node that participates in an NSX-T Data Center overlay or NSX-T Data Center VLAN networking.

The following prerequisites need to be met

Verify the Transport Node Status

Make sure that the transport node creation process is working correctly.

After creating a host transport node, the N-VDS gets installed on the host.

Tier-0 and Tier-1 Gateway

Before configuring Tier-0 and Tier-1 gateways, at least one NSX edge node must be installed and an NSX edge cluster configured.

You must manually connect the Tier-0 and Tier-1 gateways. The management plane cannot determine which Tier-1 instance should connect to which Tier-0 instance.

VMs on various subnets or segments attached to the Tier-1 gateway can communicate with each other.

Each Tier-0 gateway can have multiple uplink connections if required. After the Tier-0 gateway is created, we can set up its interfaces.

Gateway Command Lines

Logical Switches Command Lines

Configure Routing

After creating the Tier-0 gateway, static or dynamic routing to remote networks can be set up by editing the Tier-0 gateway.

To propagate tenant networks from the Tier-1 gateway to the Tier-0 gateway, route advertisement needs to be enabled on the Tier-1 gateway.

Dynamic routing protocol categories

  1. Interior gateway protocols (IGPs)
  2. Exterior gateway protocols (EGPs)

NSX implements internal BGP (iBGP) and external BGP (eBGP).

External BGP is used to establish a neighbor relationship between the Tier-0 gateway and upstream physical gateways in a different AS. The Tier-0 gateway BGP topology should be configured with redundancy and symmetry between the Tier-0 gateways and the external peers. BGP is enabled by default on Tier-0 gateways.
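As a sketch, an eBGP peer for a Tier-0 gateway can be described with a Policy API-style body like the one below; the address and AS numbers are invented for the example:

```python
import json

# Hypothetical eBGP peering: Tier-0 in AS 65001 peering with a ToR in AS 65000.
LOCAL_AS = "65001"

bgp_neighbor = {
    "display_name": "tor-a",
    "neighbor_address": "192.0.2.1",  # documentation-range address (placeholder)
    "remote_as_num": "65000",         # differs from the local AS -> eBGP session
}

# Would be sent as, e.g.:
# PATCH /policy/api/v1/infra/tier-0s/<t0>/locale-services/<ls>/bgp/neighbors/tor-a
print(json.dumps(bgp_neighbor, indent=2))
```

For redundancy and symmetry, you would define one such neighbor per ToR switch so each Tier-0 edge node peers with both upstream routers.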

Logical Routing Command Lines

NSX-T Virtual IP Address

If NSX-T is integrated with VMware Identity Manager (vIDM) and VMware Lifecycle Manager, the following process is required when configuring the NSX-T VIP:


Firewall Security Terminology
Identity Firewall

With Identity Firewall (IDFW) features, an NSX administrator can create Active Directory user-based Distributed Firewall (DFW) rules. IDFW can be used for Virtual Desktops (VDI) or Remote Desktop Sessions (RDSH support), enabling simultaneous logins by multiple users, user application access based on requirements, and the ability to maintain independent user environments.

Troubleshooting Distributed Firewall

Distributed firewall policies have the following categories:

NSX-T distributed firewall rule processing is carried out:

VMware Internetworking Service Insertion Platform (VSIP) module is the main part of the distributed firewall kernel module that receives the firewall rules and downloads them on the VM’s vNIC.

Verify firewall from API

Gateway Firewall Troubleshooting

Gateway firewall policies have the following categories:

Gateway firewall rules are programmed into rule classifier.

NSX-T Services

Load Balancer

NSX-T 3.0 supports Layer 4 and Layer 7 load balancers. The load balancer runs on the Tier-1 gateway edge cluster in active-standby mode.

It is commonly deployed in one-arm or inline mode.

  1. Layer 4 — L4 load balancing is connection-based; it supports the TCP and UDP protocols.
  2. Layer 7 — L7 load balancing is content-based; it supports HTTP and HTTPS and allows URL manipulation through user-defined rules.

Distributed Load Balancer

A Distributed Load Balancer (DLB) configured in NSX-T Data Center can help you effectively load balance East-West traffic and scale traffic because it runs on each ESXi host.

In traditional networks, a central load balancer deployed on an NSX Edge node is configured to distribute traffic load managed by virtual servers that are configured on the load balancer.

If you are using a central load balancer, increasing the number of virtual servers in the load balancer pool might not always meet the scale or performance criteria of a multi-tier distributed application. A distributed load balancer is realized on each hypervisor where load-balancing workloads, such as clients and servers, are deployed, ensuring traffic is load balanced on each hypervisor in a distributed way.

A distributed load balancer can be configured on the NSX-T network along with a central load balancer.

Gateway Firewall

VPN Service


IPSec VPN secures traffic flowing between two networks connected over a public network through IPSec gateways called endpoints.

IPSec VPN uses the IKE protocol to negotiate security parameters. The default UDP port is 500. If NAT is detected in the gateway, the port is set to UDP 4500.
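The port selection just described amounts to a one-line rule; a sketch of it (not NSX code) looks like this:

```python
# Sketch of IKE port selection as described above: UDP 500 by default,
# UDP 4500 (NAT traversal, NAT-T) when NAT is detected between gateways.

IKE_PORT = 500
NAT_T_PORT = 4500

def ike_udp_port(nat_detected: bool) -> int:
    """Return the UDP port the IKE exchange uses."""
    return NAT_T_PORT if nat_detected else IKE_PORT

print(ike_udp_port(False))  # 500
print(ike_udp_port(True))   # 4500
```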

NSX Edge supports a policy-based or a route-based IPSec VPN.

The Internet Key Exchange (IKE) profiles provide information about the algorithms that are used to authenticate, encrypt, and establish a shared secret between network sites when you establish an IKE tunnel.

Policy Based IPSec VPN

Policy-based IPSec VPN requires a VPN policy to be applied to packets to determine which traffic is protected by IPSec before being passed through the VPN tunnel.

Route Based IPSec VPN

Route-based IPSec VPN provides tunneling on traffic based on the static routes or routes learned dynamically over a special interface called virtual tunnel interface (VTI) using, for example, BGP as the protocol. IPSec secures all the traffic flowing through the VTI.

Route-based IPSec VPN is similar to Generic Routing Encapsulation (GRE) over IPSec, with the exception that no additional encapsulation is added to the packet before applying IPSec processing.

Layer 2 VPN

With Layer 2 VPN (L2 VPN), you can extend Layer 2 networks (VNIs or VLANs) across multiple sites on the same broadcast domain. This connection is secured with a route-based IPSec tunnel between the L2 VPN server and the L2 VPN client. The extended network is a single subnet with a single broadcast domain, which means the VMs remain on the same subnet when they are moved between sites.

L2 VPN services are supported on both Tier-0 and Tier-1 gateways. Only one L2 VPN service (either client or server) can be configured on a given Tier-0 or Tier-1 gateway.

Each L2 VPN session has one Generic Routing Encapsulation (GRE) tunnel. Tunnel redundancy is not supported. An L2 VPN session can extend up to 4094 L2 segments.

VLAN-based and VNI-based segments can be extended using L2 VPN service on an NSX Edge node that is managed in an NSX-T Data Center environment. You can extend L2 networks from VLAN to VNI, VLAN to VLAN, and VNI to VNI. Segments can be connected to either Tier-0 or Tier-1 gateways and use L2 VPN services.

Autonomous Edge as an L2 VPN Client

You can use L2 VPN to extend your Layer 2 networks to a site that is not managed by NSX-T Data Center. An autonomous NSX Edge can be deployed on the site, as an L2 VPN client. The autonomous NSX Edge is simple to deploy, easily programmable, and provides high-performance VPN. The autonomous NSX Edge is deployed using an OVF file on a host that is not managed by NSX-T Data Center. You can also enable high availability (HA) for VPN redundancy by deploying primary and secondary autonomous Edge L2 VPN clients.

Overlay network over NSX-T VPN

Site-to-Site VPN Between NSX-T Tier-1 And AWS VPC

Hub and Spoke Layer 2 VPNs between multiple NSX-T enabled sites

Add an IPSec VPN Service

NSX-T provides support for two types of VPN services:

  1. IPSec VPN provides a secure transport service between locations connected by public or non-secured IP networks. It supports two types of VPN: policy-based and route-based.

  2. Layer 2 VPN enables a company to securely extend a Layer 2 network across two different locations or data centers.

IPSec Service Verification
IPSec Logging

IPSec VPN syslog messages are located in /var/log/syslog.

Ethernet VPN (EVPN)

EVPN (Ethernet VPN) is a standards-based BGP control plane that provides the ability to extend Layer 2 and Layer 3 connectivity between different data centers.

Configure an EVPN Tenant

If you configure EVPN Route Server mode, you must configure an EVPN tenant. To configure an EVPN tenant, you must specify one or more VLAN-VNI mappings. The VLANs identify the tenant, and each VNI identifies a VRF gateway.
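To make the mapping constraints concrete, here is a hedged validator sketch. The data layout and range checks are illustrative assumptions, not NSX-T API structures:

```python
# Hypothetical sketch of the VLAN-VNI mappings an EVPN tenant needs:
# the VLANs identify the tenant, each VNI identifies a VRF gateway.

def validate_mappings(mappings):
    """mappings: list of (vlan_id, vni) pairs; returns True if sane."""
    if not mappings:
        raise ValueError("an EVPN tenant needs at least one VLAN-VNI mapping")
    for vlan, vni in mappings:
        if not 0 <= vlan <= 4094:                  # valid 802.1Q VLAN id
            raise ValueError(f"invalid VLAN id {vlan}")
        if not 0 <= vni < (1 << 24):               # VNI is a 24-bit value
            raise ValueError(f"invalid VNI {vni}")
    # each VNI identifies one VRF gateway, so VNIs must be unique
    vnis = [vni for _, vni in mappings]
    if len(vnis) != len(set(vnis)):
        raise ValueError("duplicate VNI in tenant mappings")
    return True
```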

Forwarding Policies

This feature pertains to NSX Cloud.

Forwarding Policies or Policy-Based Routing (PBR) rules define how NSX-T handles traffic from an NSX-managed VM. This traffic can be steered to NSX-T overlay or it can be routed through the cloud provider's (underlay) network.

Three default forwarding policies are set up automatically after you either deploy a PCG on a Transit VPC/VNet or link a Compute VPC/VNet to the Transit.

Network Settings

vSphere Distributed Switch for NSX-T

N-VDS supported modes: Standard, Enhanced Datapath


Multicast

You can configure multicast on a tier-0 gateway and optionally on a tier-1 gateway for an IPv4 network to send the same multicast data to a group of recipients. In a multicast environment, any host, regardless of whether it is a member of a group, can send to a group. However, only the members of a group will receive packets sent to that group.

Distributed IDS/IPS

Distributed Intrusion Detection and Prevention Service (IDS/IPS) monitors network traffic on the host for suspicious activity.

Signatures can be enabled based on severity. A higher severity score indicates an increased risk associated with the intrusion event. Severity is determined based on the following:

a. Severity specified in the signature itself

b. CVSS (Common Vulnerability Scoring System) score specified in the signature

c. Type rating associated with the classification type

IDS detects intrusion attempts based on already-known malicious instruction sequences. The patterns the IDS detects are known as signatures. You can set specific signature actions (alert/drop/reject) globally or per profile.
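To make the CVSS factor concrete, the sketch below buckets a CVSS v3 base score using the standard qualitative scale. This is illustrative only and not necessarily the exact internal mapping NSX-T applies:

```python
# Illustrative sketch: bucketing a CVSS v3 base score into a severity
# label using the common CVSS v3 qualitative scale. These thresholds are
# NOT claimed to be the exact mapping NSX-T uses internally.

def severity_from_cvss(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base score must be between 0.0 and 10.0")
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score > 0.0:
        return "low"
    return "none"

print(severity_from_cvss(9.8))  # critical
```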

Endpoint Protection

NSX-T Data Center allows you to insert third-party partner services as a separate service VM that provides Endpoint Protection services. A partner service VM processes file, process, and registry events from the guest VM based on the endpoint protection policy rules applied by the NSX-T Data Center administrator.

NSX-T Data Center Multisite

NSX-T Data Center supports multisite deployments where you can manage all the sites from one NSX Manager cluster.

Two types of multisite deployments are supported:

NSX Federation

With NSX Federation, you can manage multiple NSX-T Data Center environments with a single pane of glass view, create gateways and segments that span one or more locations, and configure and enforce firewall rules consistently across locations.

Once you have installed the Global Manager and have added locations, you can configure networking and security from Global Manager.

Networking in NSX Federation

Tier-0 gateways, tier-1 gateways, and segments can span one or more locations in the NSX Federation environment.

Packet Capture Troubleshooting

Use the packet capture tools in NSX-T transport node and ESXi hypervisor for troubleshooting.

Capturing a network trace in ESXi using Tech Support Mode or ESXi Shell (1031186)

Detailed walkthrough with tcpdump-uw and pktcap-uw

How to Change the NSX-T Virtual IP Address

  1. Access the NSX node (not VIP) IP address URL and log in locally
  2. Navigate to System -> Appliance
  3. In the Virtual IP field, select Edit virtual IP to update the VIP
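The same change can also be scripted against the REST API. The sketch below only builds the request URL; the endpoint path and parameter names reflect the NSX-T 3.x cluster virtual-IP API as commonly documented — treat them as assumptions and verify against your version's API reference before use:

```python
# Build the (assumed) set-virtual-IP API call. Note: the call targets a
# manager node IP, not the VIP itself, matching the UI procedure above.
from urllib.parse import urlencode

def set_vip_url(manager_node_ip: str, vip: str) -> str:
    """Return the assumed POST URL for setting the cluster virtual IP."""
    query = urlencode({"action": "set_virtual_ip", "ip_address": vip})
    return f"https://{manager_node_ip}/api/v1/cluster/api-virtual-ip?{query}"

print(set_vip_url("10.0.0.11", "10.0.0.10"))
```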

NSX Manager Cluster Integration with VMware Identity Manager

  1. Add a new remote app access in vIDM for NSX access
  2. Configure NSX Manager with vIDM

Manually join the new NSX Manager node to the NSX Manager cluster

Authentication and Authorization

In NSX-T Data Center 3.1, you can log in to NSX Manager using a local user account, a user account managed by VMware Identity Manager (vIDM), or a user account managed by a directory service such as Active Directory over LDAP or OpenLDAP. You can also assign roles to user accounts managed by vIDM or a directory service to implement role-based access control.

Local user passwords on NSX appliances are secured using the default Linux/PAM libraries which store the hashed and salted representation in /etc/shadow.

How to Reset the Forgotten Password of an Appliance

The following procedure applies to NSX Manager, NSX Edge, Cloud Service Manager, and NSX Intelligence appliances.

Note: When you reboot an appliance, the GRUB boot menu does not appear by default. The following procedure requires that you have configured GRUB to display the GRUB boot menu.

How to Integrate vIDM with NSX-T

Before you configure the integration of vIDM with NSX-T, you must get the certificate thumbprint from the vIDM host.

You must use OpenSSL version 1.x or higher to obtain the thumbprint. On the vIDM host, the openssl command runs an older OpenSSL version, so you must use the openssl1 command instead. The openssl1 command is only available on the vIDM host.

In a server that is not the vIDM host, you can use the openssl command that is running OpenSSL version 1.x or higher.

You can integrate NSX-T Data Center with VMware Identity Manager (vIDM), which provides identity management services. The vIDM deployment can be a standalone vIDM host or a vIDM cluster.

NSX-T Certificates

There are three categories of self-signed certificates in NSX-T Data Center.

Platform Certificates

After installing NSX-T Data Center, navigate to System > Certificates to view the platform certificates created by the system. By default these are self-signed X.509 RSA 2048/SHA256 certificates for internal communication within NSX-T Data Center and for external authentication when NSX Manager is accessed using APIs or the UI.

NSX Service Certificates

NSX service certificates are used for services such as load balancer and VPN.

NSX service certificates cannot be self-signed. You must import them.

Principal Identity (PI) Certificates

PI certificates can be for services or for platform.

PI for Cloud Management Platforms (CMP), such as Openstack, uses X.509 certificates that are uploaded when onboarding a CMP as a client.

Certificates for NSX Federation

The system creates certificates required for communication between NSX Federation appliances as well as for external communication.

By default, the Global Manager uses self-signed certificates for communicating with internal components and registered Local Managers, as well as for authentication for NSX Manager UI or APIs.

You can view the external (UI/API) and inter-site certificates in NSX Manager. The internal certificates are not viewable or editable.

Backing Up and Restoring NSX Manager or Global Manager

While the appliance is inoperable, the data plane is not affected, but you cannot make configuration changes.

Set up the SFTP server and verify that it is ready for use and running SSH and SFTP, using the following commands:

Ensure that the directory path exists where you want to store your backups. You cannot use the root directory (/).

If you have multiple NSX-T Data Center deployments, you must use a different directory for storing the backup of each deployment.

You can take backups using either the IP address or the FQDN of the NSX Manager or Global Manager appliance:

a) If you are using the IP address for backup and restore, do not publish the appliance's FQDN.

b) If you are using FQDN for backup and restore, you must configure and publish the FQDN before starting the backup. Backup and restore only support lowercase FQDN. Use this API to publish the NSX Manager or Global Manager FQDN.
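A small pre-flight check can encode the constraints above (lowercase FQDN, backups not in the root directory). This is a generic sketch, not part of any NSX tooling:

```python
# Pre-flight validation of a backup target, reflecting the rules above:
# backup/restore supports only lowercase FQDNs, and the backup directory
# must not be the root directory (/).

def check_backup_config(fqdn: str, backup_dir: str) -> None:
    if fqdn != fqdn.lower():
        raise ValueError("backup/restore supports only lowercase FQDNs")
    if backup_dir == "/":
        raise ValueError("the root directory (/) cannot be used for backups")

# e.g. check_backup_config("nsx.corp.local", "/backups/site-a") passes
```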

Backup Procedure

Restore a Backup

Restoring a backup restores the state of the network at the time of the backup. In addition, the configurations maintained by NSX Manager or Global Manager appliances are also restored. For NSX Manager, any changes, such as adding or deleting nodes, that were made to the fabric since the backup was taken, are reconciled. Note: DNS entries (name servers and search domains) are not retained when you restore from a backup.

Restore Procedure

A progress bar displays the status of the restore operation noting the step the restore process is on. During the restore process, services on the manager appliance get restarted and the control plane becomes unavailable until restore completes. After the restore operation is finished, the Restore Complete screen shows the result of the restore, the timestamp of the backup file, and the start and end time of the restore operation. Any segments created after the backup was taken are not restored.

You can also determine whether there was a cluster restore or node restore failure by examining the log files. Run get log-file syslog to view the system log file and search for the strings Cluster restore failed and Node restore failed.

To restart the manager, run the restart service manager command.

To reboot the manager, run the reboot command.

Certificate Management after Restore

After restoring your NSX Manager appliances, certificates in the system get into an inconsistent state and you must update all self-signed or CA-signed certificates.

How to Change the IP Address of an NSX Manager

You can change the IP address of an NSX Manager in an NSX Manager cluster. This section describes several approaches. For example, if you have a cluster consisting of Manager A, Manager B, and Manager C, you can change the IP address of one or more of the managers in the following ways:

How to Resize an NSX Manager Node

Replacing an NSX Edge Transport Node in an NSX Edge Cluster

You can replace an NSX Edge transport node in an NSX Edge cluster using the NSX Manager UI or the API.

Log Messages and Error Codes

NSX-T Data Center components write to log files in the directory /var/log.

Viewing Logs

On NSX-T appliances, syslog messages are in /var/log/syslog. On KVM hosts, syslog messages are in /var/log/vmware/nsx-syslog.

On NSX-T appliances, you can run the following NSX-T CLI command to view the logs:

The log files are:

On hypervisors, you can use Linux commands such as tac, tail, grep, and more to view the logs.

Configure Remote Logging

Log Message IDs

In a log message, the message ID field identifies the type of message. You can use the messageid parameter in the set logging-server command to filter which log messages are sent to a logging server.
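Conceptually, the messageid filter keeps only log lines whose message-ID matches a wanted set. A simplified sketch of that filtering follows (the bracketed line format here is invented for illustration, not the exact NSX-T syslog layout):

```python
# Sketch of the kind of filtering the messageid parameter performs:
# keep only syslog lines whose message-ID field matches a wanted set.

def filter_by_message_id(lines, wanted_ids):
    """Return only lines whose bracketed message ID is in wanted_ids."""
    out = []
    for line in lines:
        # assume the message ID appears as a bracketed token, e.g. "[FIREWALL]"
        if any(f"[{mid}]" in line for mid in wanted_ids):
            out.append(line)
    return out
```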

Troubleshooting Syslog Issues

If logs are not received by the remote log server, perform the following steps.

Find the SSH Fingerprint of a Remote Server

Some tasks that involve communication with a remote server require that you provide the SSH fingerprint for the remote server. The SSH fingerprint is derived from a host key on the remote server.

To connect using SSH, the NSX Manager and the remote server must have a host key type in common.
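The SHA256 fingerprint format that OpenSSH prints (for example via `ssh-keygen -lf`) is the base64-encoded SHA-256 digest of the raw host-key blob with trailing padding stripped. A minimal sketch of the derivation:

```python
# Derive an SSH SHA256 fingerprint from an authorized_keys-style line:
# base64(sha256(raw key blob)) with trailing '=' padding removed,
# prefixed with "SHA256:" — the same scheme OpenSSH uses.
import base64
import hashlib

def ssh_sha256_fingerprint(pubkey_line: str) -> str:
    """pubkey_line: e.g. 'ssh-ed25519 AAAAC3Nza... host'."""
    blob_b64 = pubkey_line.split()[1]      # second field is the key blob
    blob = base64.b64decode(blob_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```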

Using NSX Cloud

NSX Cloud enables you to manage and secure your public cloud inventory using NSX-T Data Center.

VPCs or VNets
CSM Icons

CSM displays the state and health of your public cloud constructs using descriptive icons.

Join CSM with NSX Manager

You must connect the CSM appliance with NSX Manager to allow these components to communicate with each other.

NSX Maintenance Mode

If you want to avoid vMotion of VMs to a transport node that is not functional, place that transport node in NSX Maintenance Mode.

To put a transport node in NSX Maintenance Mode, select the node, click Actions → NSX Maintenance Mode.

When you put a host in NSX Maintenance Mode, the transport node cannot participate in networking. Also, VMs running on other transport nodes that have N-VDS or vSphere Distributed Switch as the host switch cannot be vMotioned to this transport node. In addition, logical networks cannot be configured on ESXi or KVM hosts.

Scenarios to put the transport node in NSX Maintenance Mode:

  1. A transport node is not functional.
  2. If a host has hardware or software issues that are unrelated to NSX-T, but you want to retain the node and its configurations in NSX-T, place the host in NSX Maintenance Mode.
  3. A transport node is automatically put in NSX Maintenance Mode when an upgrade on that transport node fails.

Note: Any transport node put in the NSX Maintenance Mode is not upgraded.

How to install/configure new NSX-T transport node
How to migrate vmkernel adapters to N-VDS method 2

Besides using Configure NSX/Install NSX and specifying the network mappings to migrate vmk0 and other vmkernel adapters from a standard switch or distributed switch to N-VDS, you can migrate them from Actions -> Migrate ESX VMkernel and Physical Adapters when selecting the ESX transport node.

How to Manually Remove Faulty NSX-T Transport Nodes

This should be your last resort option if the regular procedures to remove ESXi based Transport Nodes from the NSX Manager do not work anymore.

There are three methods, depending on whether the transport node is reachable from NSX Manager:

  1. Using NSX Manager
  2. Using NSXCLI
  3. Using ESXi native tools

If NSX Intelligence is also deployed on the host, uninstallation of NSX-T Data Center will fail because all transport nodes become part of a default network security group. To successfully uninstall NSX-T Data Center, you also need to select the Force Delete option before proceeding with uninstallation.

Remove NSX from the transport node Using NSX Manager
Transport Node Profiles

When a “Transport Node Profile” is attached to the cluster the faulty node resides in, the “Remove NSX” option is not available for that specific node, but only for the whole cluster. In this case use the “Detach Transport Node Profile” option in the “Actions” menu. When detaching the “Transport Node Profile” from a cluster it has no impact on data plane traffic within the cluster.

Remove NSX from transport node using NSXCLI

If all the NSX-T modules are still present on the node, this method should be your preferred one over using the ESXi native tools.

Remove NSX from transport node using ESXCLI

Only when removal from the NSX GUI and NSXCLI is not possible, for whatever reason, is a manual cleanup needed.

The config that is normally removed by NSX Manager or NSXCLI (besides the VIBs) is:

How to do manual cleanup

Now remove the VIBs from the node in the correct order. The last VIB to be removed, "nsx-esx-datapath", cannot be removed while the N-VDS is still present on the node; only for that VIB must the "--no-live-install" switch be added. Run the command for every VIB to be removed.

Modernize Your Network with VMware NSX-T

The way applications are architected and deployed has changed. Modern applications run on multiple clouds, making use of heterogeneous compute platforms such as containers, virtual machines and bare metal. With frequent releases and rapid application deployment, the network must become just as agile to support applications on any compute.

With an API-driven architecture, built-in distributed security, and streamlined operations, VMware NSX-T is uniquely designed to address this challenging environment and bring the public cloud experience to your private cloud.

What is NSX-T?

In this series of articles we will start getting acquainted with a wonderful product: NSX-T. I have been waiting for this moment for a long time — until now, the NSX button on my blog led nowhere. So what is this product, and what is it for? NSX is a solution for virtualizing the network in your data center. Normally we have a bunch of ESXi hosts that can handle network traffic at the L2 level: they tag frames, receive VLANs from the physical world, and deliver them to virtual machines. Yes, basic firewall functions are available on the VDS, and ESXi knows the IP addresses of virtual machines and sees them in traffic, but that is about it — the real networking action happens on our routers, firewalls, feature-packed switches, and other vendor hardware. These boxes are certainly functional and do their jobs well, but they suffer from the shortcomings of any physical equipment. For example, any high-end router with NGFW features is limited by its hardware, by the throughput of its ports, and by the finite capabilities of its OS. Over time its hardware falls behind newer competing models, its OS cannot pull off the latest tricks with network traffic, and it periodically has to be taken down for maintenance. In short, we face the same limitations inherent in any physical machine — exactly the ones virtualization eliminates. And keep in mind that for traffic to be processed by hardware, the traffic first has to travel to that hardware, loading the links along the way, even if after processing (routing or ACLs) it then has to fly right back to the very same ESXi host — say, to a VM located in a different VLAN but on the same hypervisor.

Now let us look at the world of virtual networks, where traffic goes through basic routing and firewalling without ever entering the physical world. Every host in the infrastructure runs its own firewall and router. Our VMs' traffic does not travel over the links to the core in order to be routed; it is routed on the host and then physically flies wherever it needs to go, without detours through the infrastructure. The core idea of NSX virtual networks is to provide network functions where they are needed. It so happens that the x86 architecture awaits us almost everywhere — compute, storage, network equipment. Not literally everywhere, of course, but for now it clearly leads. Yes, ARM is on the rise, and the recently released ESXi for ARM fling is direct proof of that, but that process has only just begun. Back to x86: VMware wonderfully uses it within a single host to provide compute resources (vSphere), distributed storage (vSAN), and networking (NSX). This concept is what defines the Software Defined Data Center.

NSX-T is not only a tool for virtualizing networks in your data center. Beyond "on-premises" use, you can also "stretch" your networks into public clouds, including VMware Cloud on AWS and Microsoft Azure. NSX can be deployed in the cloud too, in the form of NSX Cloud. Its architecture and components differ slightly from the on-premises version, but it provides the same level of micro-segmentation and security. The main reason to use NSX Cloud is so that applications and services in the cloud are protected by the same tools and governed by the same policies as on-premises ones.

The NSX Product Portfolio

NSX-T itself is a large part of the Virtual Cloud Network family of products, which also includes other network virtualization solutions (SD-WAN and so on). Since those other products are outside the scope of this series, we will focus on NSX and look at the solutions it consists of.

As the picture shows, NSX is not limited to NSX-T Data Center. There are other solutions, which can be part of NSX-T Data Center, be separate products that integrate with it, or be fully independent. This series is about NSX-T Data Center; the remaining solutions we will cover only briefly:

  • NSX Data Center – the main character of this series.
  • NSX Cloud – mentioned above; deployed in public clouds, it lets you merge cloud and on-premises networks into one infrastructure under a single set of policies.
  • NSX Intelligence – a distributed analytics solution that provides visibility and dynamic enforcement of security policies for NSX-T Data Center environments. Yes, those are marketing words; in short, it is an appliance you can deploy and integrate with NSX-T, used for analytics and for "hints" about what to improve in your virtual network infrastructure.
  • NSX Distributed IDS/IPS – an advanced threat-detection engine purpose-built to detect threats in network traffic within an NSX infrastructure.
  • NSX Advanced Load Balancer – a separate product: a highly capable network load balancer with a pile of advanced features, such as dynamically scaling the balancer's compute resources as connection counts grow.
  • NSX Service Mesh – provides discovery and secure communication of microservices across heterogeneous infrastructure.
  • VMware HCX – provides application and network flexibility and mobility.

In addition, this whole gang can integrate with monitoring and automation tools from the vRealize family:

  • vRealize Network Insight Cloud (SaaS) and vRealize Network Insight
  • vRealize Automation
  • vRealize Operation Manager

Now let us move from the lyrical introduction to what NSX consists of and how it works. NSX is a rather complex product with many components and technical capabilities. To master it, you need to be familiar with vSphere and with virtualization concepts in general, and — just as importantly — to know networking and network protocols reasonably well. Some time ago NSX was split into two independent products: NSX-V and NSX-T. NSX-V was built for vSphere only, excluded other hypervisors and platforms, and was managed from vCenter; its life is now ending and it will soon be discontinued. NSX-T is the multi-platform NSX, with its own management console and no vCenter requirement. NSX-T can span both vSphere and KVM within a single fabric. Comparing the two (V and T), NSX-T is considerably more complex and more functional. The products differ technically in many ways, but their operating principles are the same. In this series we will talk about NSX-T.

The NSX-T Data Center Architecture

The overall NSX-T architecture is quite complex and multi-layered. In my view it can be split into at least a logical and a physical architecture, with the management architecture added on top. Let us start with the last one.

The NSX-T Management Architecture

This whole machine is managed by means of:

  1. Management Plane
  2. Control Plane
  3. Data Plane

Management Plane – the entry point for user requests; it accepts user-defined configuration and API requests. At the Management Plane level, a user or another system interacts with NSX-T via the REST API and "says" what it needs; the Management Plane then assembles what it received into a more technical form and hands it to the Control Plane. It resides on the NSX Manager, with fault tolerance provided by the NSX Manager Cluster.

Control Plane – receives configuration and requests from the Management Plane, structures them, and then passes them down to the Data Plane for execution. The Control Plane is split into a Central Control Plane (CCP) and a Local Control Plane (LCP). This split greatly simplifies the CCP's work and allows the platform to grow and scale.

The Management Plane and the Central Control Plane live on the NSX Manager. The Local Control Plane resides on the hypervisors and Edge nodes; it receives from the CCP the configuration relevant to its hypervisor or Edge and then "instructs" the underlying Data Plane what to do. In NSX-V, a separate appliance was deployed for the Control Plane and then made redundant with a three-node cluster. In NSX-T, the Control Plane is split into CCP and LCP; as noted above, the CCP lives on the NSX Manager and its fault tolerance is provided by the NSX Management Cluster.

Each NSX Manager in the Management Cluster carries a CCP controller; the controllers work together, sharing the management of the Transport Nodes and their configuration types between themselves.

If one NSX Manager with its on-board controller suddenly fails, the remaining cluster members redistribute the "orphaned" Transport Nodes among themselves.

Data Plane – lives on the Transport Nodes (TNs are hypervisors and Edge nodes prepared for NSX-T) and does the plain machine work of processing network packets. It knows nothing of users or anything "lofty"; its world is frames, headers, and the data inside them, plus a pile of tedious instructions handed down by its "foreman", the Local Control Plane.

The NSX-T Physical Architecture

So let us take all of this apart in detail. Besides virtual networks, NSX also consists of quite physical components:

  • NSX Manager
  • Transport Nodes
  • Edge Nodes
NSX Manager

NSX Manager – the management console of our NSX. It is a full-fledged appliance that you deploy on a hypervisor and that serves to manage all of NSX. Its installation is where NSX deployment in the infrastructure begins. Users interact with it through the web interface, CLI, API, and so on. It can — and should — be assembled into a fault-tolerant cluster of three Managers with a virtual IP.

The Management Plane and Control Plane we discussed above hang out on the NSX Manager, and they have changed somewhat compared to the classic version. The Management Plane has acquired a companion — the Policy Plane, which sits above it in the hierarchy. In essence the two are much the same thing: both present an interface to the end user and "understand" their wishes. So what is the difference, and why is the Policy Plane needed? The Policy Plane is the central interface that accepts simple requests from the user without specifying what must be done to fulfill them. The Management Plane receives that configuration from it, determines what needs to be done, and pushes the configs into the Control Plane. The NSX Manager interface can be presented to the administrator in two modes — policy/manager — referring back to the planes just mentioned.

Each NSX Manager also carries a database on board, which is replicated among the cluster members.

Transport Nodes

Transport Nodes. These are our hypervisors — ESXi and KVM — and also the Edge Nodes (more on those later). During NSX deployment, hypervisors are added to NSX and the corresponding packages are installed on them, enabling distributed processing of virtual machine network traffic. As noted above, NSX-T can work with both ESXi and KVM, as well as with physical servers — RHEL, CentOS, Ubuntu, SLES, Windows (yes, a special agent is installed on them and they can be added to NSX just like hypervisors). After the NSX Manager installs the necessary software, Transport Nodes can do the following:

  • Distributed routing – every host gets its own piece of the router.
  • Distributed firewall – every VM gets a personal firewall.

That is where the abilities of a Transport Node end — but not those of NSX as a whole. All the remaining functionality runs on Edge Nodes (these are transport nodes too, just "different" ones). After being added to NSX, Transport Nodes form Transport Zones (a new concept, covered later), within which the virtual network exists.

Now, a bit more about the Control Plane and Data Plane. The CCP hangs out on the NSX Manager and regularly receives configs from the Management Plane, shapes them into a more machine-friendly representation, and fires them over the network to the Transport Nodes via APH (Appliance Proxy Hub) and then the NSX-RPC protocol, where NSX-Proxy catches them and hands them to the Local Control Plane. The LCP is a sort of branch office of the Control Plane on the transport nodes; it writes the incoming information into a local database called NestedDB. This is not a persistent config store — it empties after a host reboot — but rather a staging area for information destined for the Data Plane. The Data Plane takes its instructions from the LCP and handles the traffic processing itself, making logical switching, routing, and firewalling possible.

For all of this to work, NSX needs a managed virtual switch on the Transport Node. NSX-T provides its own "branded" N-VDS, which is installed on a Transport Node at the moment it is added to NSX. As for vSphere 7.0, with NSX-T 3.0 and newer you can use the ordinary VDS — but it must be at least version 7, which means the entire vSphere infrastructure must be at least version 7. For everything else — vSphere 6.7 and earlier, and KVM — we use the N-VDS.

Edge Nodes

NSX Edge is essentially the same kind of Transport Node as an ESXi or KVM host, but I have given it its own topic because the Edge serves as the resource for the various network services listed below.

  • Reflexive NAT
  • Gateway (perimeter) firewall
  • DHCP Server/Relay
  • L2 VPN
  • IPSec VPN
  • Load balancers
  • DNS Relay
  • North-south (edge) traffic routing

That is, all computation related to the services listed above happens here. Virtual machines do not run on the Edges. The NSX Edge is the computational bottleneck through which network traffic leaves the virtual network or enters it from outside. Leaving the Edge's uplinks, network traffic exits NSX and enters the real world.

An NSX Edge can be deployed as a virtual appliance or as a physical server. The Edge is a kind of hypervisor, but for network services only. As a virtual machine, our edge enjoys all the perks of fault tolerance and flexibility, but loses a little performance. As a physical machine, the edge can become a high-throughput traffic "thresher". Edges can also be grouped into a cluster of up to 10 nodes, for service fault tolerance and increased throughput. Clusters can run in active-active or active-standby mode. For plain routing the first option is a great fit, increasing the throughput from our virtual network to the outside world via ECMP; but if we need stateful services such as NAT, load balancers, edge firewalls, VPN and so on, then our choice is active-standby. Why that is so we will dig into later in this series.
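The mode-selection logic above boils down to one rule: any stateful service pins flows to a single active node, while pure routing can be spread across all edges with ECMP. A minimal sketch, with an invented function name and service list:

```python
# Services that keep per-flow state and therefore force active-standby.
# This set is illustrative, not an exhaustive NSX-T list.
STATEFUL_SERVICES = {"nat", "load_balancer", "gateway_firewall", "vpn", "dhcp"}

def edge_cluster_ha_mode(enabled_services):
    """Pick the HA mode for an edge cluster hosting the given services."""
    if enabled_services & STATEFUL_SERVICES:
        # Stateful flows must stick to one node, so only one edge is active
        return "active-standby"
    # Plain routing can spread flows across all edges with ECMP
    return "active-active"

print(edge_cluster_ha_mode(set()))            # active-active
print(edge_cluster_ha_mode({"nat", "vpn"}))   # active-standby
```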

NSX-T Logical Architecture

This is the most complex part, and it is what actually constitutes the architecture of the virtual network. Keep in mind that most objects in NSX-T are logical: a logical router, a logical switch, and so on. Although modern NSX-T has renamed these entities in the context of Policy mode, you will still run into the old terms most of the time when reading the documentation or talking to people.

We will not go through this architecture in detail here, because doing it properly would mean covering how NSX-T works and how it is built in full, i.e. the entire functionality of the product: the logical L2 and L3 layers, services, security, and everything else. Instead, we will stop at each module of the architecture separately later on.

In brief, the logical architecture of NSX-T consists of:

  • Segments, a.k.a. logical switches
  • Tier-1 and Tier-0 gateways/logical routers
  • Load balancers
  • Distributed and edge (gateway) firewalls
  • And much more…

I would really like to put a picture here that at least visually covers all of this, but unfortunately everything listed above consists of fairly complex entities made up of many components, so we will do without a picture.
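One way to see how these logical objects relate to each other is through the declarative JSON objects of the NSX-T Policy API: a segment is attached to a Tier-1 gateway, which in turn is linked northbound to a Tier-0. The IDs, names, and addresses below are made up for illustration; only the general payload shape follows the Policy API convention of objects under `/policy/api/v1/infra`.

```python
import json

tier0 = {
    "resource_type": "Tier0",
    "id": "t0-gw",
    "display_name": "t0-gw",
    # plain north-south routing with ECMP, no stateful services
    "ha_mode": "ACTIVE_ACTIVE",
}

tier1 = {
    "resource_type": "Tier1",
    "id": "t1-gw",
    "display_name": "t1-gw",
    # link the Tier-1 northbound to the Tier-0
    "tier0_path": "/infra/tier-0s/t0-gw",
}

segment = {
    "resource_type": "Segment",
    "id": "web-segment",
    "display_name": "web-segment",
    # attach the segment (logical switch) to the Tier-1 gateway
    "connectivity_path": "/infra/tier-1s/t1-gw",
    "subnets": [{"gateway_address": "10.10.10.1/24"}],
}

# Each object would be sent declaratively, e.g.
#   PATCH /policy/api/v1/infra/segments/web-segment
print(json.dumps(segment, indent=2))
```

The point is the hierarchy: segments hang off Tier-1 gateways, Tier-1 gateways hang off Tier-0 gateways, and the Tier-0 is where traffic finally exits toward the physical network via the edges.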

I hope I have managed to outline, in broad strokes, what NSX-T is, what it is for, and what it consists of. In future articles I plan to introduce the reader to the deeper details of this product, as well as to the inner workings of its logical and physical components.

About the author

Some virtualization engineer or other. In this context it does not really matter. That is not what you came here for, right?
