How to Upgrade from Windows vCenter 5.5 to VCSA 6.5 including SRM

Over the last couple of days, I have heard this question many times for the scenario below. Since vSphere 5.5 reaches end of support this month, IT admins are upgrading their environments to a newer version of vSphere. Going directly to 6.7 does not meet the N-1 requirement for most environments, so most admins prefer to move their environment to vSphere 6.5.

The very first questions everyone thinks about are –

  • Is the upgrade path from vCenter 5.5 to VCSA 6.5 supported? The answer is YES.
  • Is the upgrade path from VMware SRM 5.8.1 to 6.5 supported? The answer is NO. You need to upgrade VMware SRM 5.8.1 to 6.0 or 6.1.2 first, and then upgrade that version to SRM 6.5.

Here are the details of the existing environment and the requirements that need to be met.

Existing Environment: –

  • Windows based vCenter Server – Version 5.5
  • Site Recovery Manager – Version 5.8
  • Replication Type in SRM – vSphere Replication

Requirement: –

  • Appliance based vCenter Server (VCSA) – Version 6.5
  • Site Recovery Manager – Version 6.5
  • Replication Type in SRM – vSphere Replication

Sequence to upgrade Windows based vCenter 5.5 to VCSA 6.5: –


If you are planning to upgrade your vSphere environment, follow this order to upgrade vCenter server, SRM, and vSphere Replication.

Overview of Upgrade order: –

  • Since vCenter Server 5.5 doesn't have a PSC, make sure a PSC is installed as part of the vCenter Server upgrade.
  • Upgrade the PSC and vCenter Server in the Protected Site.
  • Upgrade the vSphere Replication appliance in the Protected Site.
  • Upgrade Site Recovery Manager in the Protected Site.
  • Perform the same steps in the Recovery Site.
  • If you are using array-based replication in SRM, you need to upgrade the SRA at both sites.
  • Once the upgrade sequence is complete, verify the vCenter Server and SRM site status.
  • Upgrade the ESXi hosts in both the protected and recovery sites.

Step by Step Guide to Upgrade vSphere Environment: –

Prerequisites: –

  • Download the vCenter Server Appliance ISO image, the VMware SRM 6.5 setup, and the vSphere Replication appliance from the VMware download portal.
  • Ensure that you have a Windows machine from which you will initiate the installation.
  • Keep the SSO credentials and the VCDB and SRM DB database credentials handy.
  • Note down the details of the ESXi host where you will deploy the VCSA appliance.
  • If you want to keep the vCenter Server name you are currently using, you need to rename the Windows vCenter VM to an alternate name first.
  • If you are using VMware SRM 5.8.1, you need to upgrade it to VMware SRM 6.0 first, and then you can upgrade it to VMware SRM 6.5.

vCenter Server Upgrade: –

Upgrading vCenter Server is a two-stage process.

  1. Deployment of VCSA
  2. Migration of Windows vCenter Data to newly deployed VCSA.


Stage 1:

  • Mount the VCSA ISO image on any Windows machine. Browse the ISO and navigate to VMware-Migration-Assistant.exe. Right-click it and select Run as Administrator.
  • Follow the steps and provide the SSO credentials.
  • Once you see a black screen with the message "Waiting for migration to start", switch to the ISO folder again.
  • Go to the vcsa-ui-installer/win32 folder, right-click Installer, and select Run as administrator.
  • On the Windows screen, you will get four options:
    • Install
    • Upgrade
    • Migrate
    • Restore
  • Click on Migrate and follow the steps to complete the deployment of the VCSA.

Stage 2:

  • Once the deployment completes, you need to switch to stage 2. Here you migrate the existing Windows vCenter Server data to the VCSA.
  • During the data migration, the Windows vCenter will be shut down and the VCSA will be configured with its IP address.
  • Follow the steps and complete stage 2. You can now access vCenter Server using the Web Client.


Upgrade vSphere Replication Appliance: –

Download the vSphere Replication appliance and upgrade it by following the VMware documentation below.

Upgrade VMware Site Recovery Manager: –

Follow the article below to upgrade VMware Site Recovery Manager.







How VMware HA Works | Deep Dive

Overview of HA (High Availability): –

  • When you create an HA cluster for the very first time, virtual machines are configured with the cluster default settings:
    • VM Restart Priority
    • Host Isolation Response
    • VM Monitoring
  • A master host election takes place when the cluster is first created. All other hosts are slaves.
  • The master host is responsible for monitoring connectivity with the slave hosts.
  • The master host also deals with the different issues that can happen:
    • A host gets network isolated.
    • A host fails (hardware or other problem).
    • A host loses its connection to the master host.
  • For the host isolation response, there are three options:
    • Leave running (default)
    • Shutdown (requires VMware Tools)
    • Power off

Components of HA: –

  • FDM
  • hostd
  • vCenter

FDM (Fault Domain Manager): –

  • Communicates host resource information, VM states, and HA properties to the other hosts.
  • It also handles the heartbeat mechanism.
  • It provides VM placement and VM restart.
  • HA has no dependency on DNS; it works on IP addresses. This is an improvement introduced with FDM.
  • FDM talks directly to hostd and vCenter.
  • FDM is not dependent on VPXA.
  • You can check the FDM log at /var/log/fdm.log.

HOSTD Agent: –

  • It is an agent installed on the ESXi host.
  • It is responsible for many tasks, such as powering on virtual machines.
  • If hostd is unavailable or not running, the host will not participate in any FDM-related process.
  • FDM relies on hostd for information about the VMs registered to the host, and manages VMs through the hostd API.
  • FDM is dependent on hostd. If hostd is not operational, FDM halts all functions and waits for hostd to become operational.

Use of vCenter in HA: –

  • Deploying and configuring the HA agent.
  • Communicating cluster configuration changes to the hosts.
  • Protection of VMs.
  • Pushing out the FDM agent to the ESXi hosts.
  • HA leverages vCenter to retrieve information about the status of VMs.

Fundamental Concepts of HA: –

  • Master/Slave Agent

  • Heart-beating

  • Isolated vs Network Partitioned

  • VM Protection

  • Component Protection

Understand Master Host: –

  • The HA architecture includes the concept of master and slave HA agents.
  • There is only one master host in an HA cluster, except during a network partition scenario.
  • The master is responsible for monitoring the health of VMs.
  • It restarts any VM which fails.
  • Slaves pass information to the master.
  • The HA agent also implements the VM/App monitoring feature, which allows it to restart a virtual machine in case of an OS failure, or restart a service in case of an application failure.
  • The master agent keeps track of the VMs for which it is responsible, and takes action when appropriate.
  • The master claims responsibility for a VM by taking ownership of the datastore on which the VM's configuration file is stored.
  • The master is responsible for exchanging state information with vCenter Server.
  • It sends and receives information to and from vCenter when required.
  • The master host initiates the restart of VMs when a host has failed.

What if the Master Fails?


An HA election occurs when you enable HA on a VMware cluster, and also when the master host:

  • Fails
  • Becomes network partitioned or isolated
  • Is disconnected from vCenter Server
  • Is put into maintenance or standby mode

The HA election takes about 15 seconds to elect a slave as the new master. It works over the UDP protocol.

The election is decided on the basis of the highest number of connected datastores.

If two or more hosts have the same number of datastores, the highest/largest MOID gets preference. The comparison is done lexically, not numerically: take two hosts with MOIDs 99 and 100. Here '9' (in 99) is greater than '1' (in 100), so 99 is the larger MOID.
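The tie-breaker above can be sketched in a few lines of Python (illustrative only, not VMware's actual code):

```python
# Illustrative only: the HA election tie-breaker compares MOIDs as strings
# (lexically), not as numbers.
def higher_moid(a: str, b: str) -> str:
    """Return the lexically greater MOID, as the election tie-breaker does."""
    return a if a > b else b

print(higher_moid("99", "100"))  # "99": '9' > '1' at the first character
```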

When a master is elected, it will try to acquire ownership of the datastores, either those it can access directly or by proxying requests through one of the slaves connected to it over the management network.

For regular storage architectures, it does this by locking a file called the "Protected List".

The naming format and location of this Protected List file are as below.

./vSphere HA/ <Cluster Specific Directory>/ProtectedList

The structure of the cluster-specific directory is:

<UUID of vCenter Server>-<number of the MOID of the cluster>-<random 8-character string>-<name of the host running vCenter Server>
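As a sketch, the directory name is just those four components joined with hyphens. All sample values below are invented for illustration:

```python
# Hypothetical illustration: assemble the cluster-specific directory name
# from the four components listed above. All sample values are invented.
def cluster_dir_name(vc_uuid, cluster_moid_num, rand8, vc_host):
    return f"{vc_uuid}-{cluster_moid_num}-{rand8}-{vc_host}"

name = cluster_dir_name("421a77cd", "21", "a1b2c3d4", "vcenter01")
print(f"./vSphere HA/{name}/ProtectedList")
```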

Understand Slave Host: –

  • It monitors the state of virtual machines and informs the master host.
  • It monitors the health of the master by monitoring its heartbeats.
  • Slave hosts send heartbeats to the master so that the master can detect an outage.

Local Files for HA: –

When HA is configured on a host, the host will store specific information about its cluster locally.

  • Cluster Config
    • It is not human readable.
    • It contains the configuration details of the cluster.
  • vmmetadata
    • This file is also not human readable.
    • It contains the actual compatibility information matrix for every HA-protected VM and lists all the hosts with which each VM is compatible.
    • Metadata includes custom properties, descriptions, tags, owner, cost center, etc. regarding a virtual machine.
  • fdm.cfg
    • Configuration settings, logging, and syslog details are stored in this file.
  • hostlist
    • A list of hosts participating in the cluster, including hostnames, IP addresses, MAC addresses, and heartbeat datastores.

Understand Heartbeating: –

This is the mechanism used by HA to check whether a host is alive.

There are two types of Heartbeat.

  1. Network Heartbeat
  2. Datastore Heartbeat

Network Heartbeat: –

  • It use by HA to determine if a ESXi host is alive.
  • Slave send network heartbeat to master and master to slave.
  • It send heartbeat by default every second.

Datastore Heartbeat: –

  • It adds an extra level of resiliency and prevents unnecessary restart attempts.
  • Datastore heartbeating enables a master to determine the state of a host that is not reachable via the management network.
  • By default, two datastores are selected, but it is possible to add more via the das.heartbeatDsPerHost advanced option. Valid values range from 2 to 5, and the default is 2.
  • The selection process gives preference to VMFS datastores over NFS.
Network Isolated vs Network Partitioned: –

Network Isolated: –

  • A host is isolated when it doesn't observe any HA management traffic on the management network and it cannot ping the configured isolation address.
  • A host is declared isolated only when it informs the master, via the datastore, that it is isolated.

Network Partitioned: –

  • A host is partitioned when it is operational but cannot be reached over the management network.
  • There will be multiple masters in a network partition scenario.


How vMotion Works?

What is vMotion?

  • vMotion enables live migration of a running virtual machine between ESXi hosts.
  • It is transparent to the virtual machine's OS and applications.
  • It is an invaluable tool for admins to achieve the following:
    • Avoid server downtime
    • Allow troubleshooting
    • Provide flexibility
  • It is a key enabler of DRS, DPM, and FT.


What needs to be migrated?

  • Processor and device state – CPU, network, SVGA
  • Disk – uses shared storage between source and destination
  • Memory – pre-copied while the virtual machine is running

How vMotion Works?

  • Quiesce the virtual machine on the source host.
  • Transfer the memory and device state (checkpoint) from source to destination.
  • Resume the virtual machine on the destination.
  • Copy the remainder of the memory from source to destination.
  • Free the virtual machine's resources on the source host.
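The memory pre-copy behind these steps is iterative: pages dirtied while a pass is running must be re-copied on the next pass, until the remaining set is small enough to send during the brief switchover. A toy model with made-up numbers:

```python
# Toy model of vMotion's iterative memory pre-copy. Pages dirtied during a
# copy pass must be re-sent; iteration stops once the remaining set is small
# enough to transfer during the switchover. All numbers are illustrative.
def precopy_rounds(total_pages, dirty_rate, copy_rate, stop_threshold=1000):
    remaining = total_pages
    rounds = 0
    while remaining > stop_threshold:
        # pages dirtied while this pass was being copied (integer math)
        remaining = remaining * dirty_rate // copy_rate
        rounds += 1
    return rounds, remaining  # final set is sent while the VM is quiesced

print(precopy_rounds(1_000_000, dirty_rate=50_000, copy_rate=500_000))  # (3, 1000)
```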


Other Interesting Facts, Problems, and Troubleshooting: –

  • The virtual machine remains suspended during the final memory transfer.
  • Copying a virtual machine with a large memory size may be problematic.
  • A 64 GB virtual machine requires around 57 seconds on a 10 GbE NIC.
  • vMotion will check the remote system to make sure there is enough RAM and CPU before it begins the process.
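The ~57-second figure is consistent with simple bandwidth arithmetic: 64 GB over a 10 Gb/s link is about 51 seconds of raw transfer, before protocol overhead and re-copying of dirtied pages:

```python
# Back-of-the-envelope check: 64 GB of memory over a 10 Gb/s link.
memory_gb = 64
link_gbps = 10
raw_seconds = memory_gb * 8 / link_gbps  # GB -> Gb, divided by line rate
print(round(raw_seconds, 1))  # 51.2
```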

Troubleshooting: –

  • The migration ID is the same on the source and destination.
    • Go to the VMkernel log (/var/log/vmkernel.log).
    • Grep for the migration ID to find all vMotion-related timing and statistics.

That’s it from here. Stay connected.


Understand L1 Terminal Fault (L1TF) – Impact and Mitigation Plan for VMware Admins

Overview of L1TF Vulnerabilities: –

On 14 August 2018, Intel disclosed a new class of three CPU speculative-execution vulnerabilities within its server, client, and workstation processors, known as "L1 Terminal Fault (L1TF)", which can occur on past and current Intel processors (from at least 2009 to 2018).


The processor vulnerability goes by the names L1TF, L1 Terminal Fault, and Foreshadow. The researchers who discovered the problem back in January reported it to Intel and called it "Foreshadow". It is similar to vulnerabilities discovered in the past, such as Spectre and Meltdown.

The new L1 Terminal Fault vulnerability involves a security hole in the CPU’s L1 data cache, a small pool of memory within each processor core that helps determine what instruction the core will execute next. L1 Terminal Fault vulnerability can occur when affected Intel microprocessors speculate beyond an unpermitted data access.

VMware has defined the four categories below for such vulnerabilities.

  • Hypervisor-Specific Mitigations prevent information leakage from the hypervisor or guest VMs into a malicious guest VM. These mitigations require code changes for VMware products.
  • Hypervisor-Assisted Guest Mitigations virtualize new speculative-execution hardware control mechanisms for guest VMs so that Guest OSes can mitigate leakage between processes within the VM. These mitigations require code changes for VMware products.
  • Operating System-Specific Mitigations are applied to guest operating systems. These updates will be provided by a 3rd party vendor or in the case of VMware virtual appliances, by VMware.
  • Microcode Mitigations are applied to a system’s processor(s) by a microcode update from the hardware vendor. These mitigations do not require hypervisor or guest operating system updates to be effective.

What is affected by L1TF: –

This vulnerability is Intel-specific. Other processors such as AMD are not affected.


Intel’s previously released microcode updates are expected to lower the risk of data exposure for consumer and enterprise users running non-virtualized operating systems, and no significant performance impacts have been noted with this particular mitigation. For virtual machines, however, the risk is higher.

Three CVEs have been assigned to this issue:


  • CVE-2018-3615 for Intel Software Guard Extensions (L1 Terminal Fault-SGX)

    • Systems with microprocessors utilizing speculative execution and Intel SGX may allow unauthorized disclosure of information residing in the L1 data cache from an enclave to an attacker with local user access via side-channel analysis.
    • Does not affect VMware products and/or services.
    • Reference Link:
  • CVE-2018-3620 for operating systems and System Management Mode (L1 Terminal Fault-OS/ SMM)

    • Systems with microprocessors utilizing speculative execution and address translations may allow unauthorized disclosure of information residing in the L1 data cache to an attacker with local user access via a terminal page fault and side-channel analysis.
    • Virtual appliances are impacted. A list of unaffected appliances can be found here. It is recommended to contact 3rd-party vendors for appliance patches.
    • Products that ship as an installable Windows or Linux binary are not directly affected.
    • May also be applicable to customer-controlled environments running in a VMware SaaS offering. Refer to
    • Other Reference Link:
    • Requires Operating System-Specific Mitigations.
  • CVE-2018-3646 for impacts to virtualization (L1 Terminal Fault-VMM)

    • This one is specific to virtual environments and impacts your virtual machines.
    • Systems with microprocessors utilizing speculative execution and address translations may allow unauthorized disclosure of information residing in the L1 data cache to an attacker with local user access with guest OS privilege via a terminal page fault and side-channel analysis.
    • It impacts hypervisors. It may allow a malicious VM running on a given CPU core to effectively infer contents of the hypervisor’s or another VM’s privileged information residing at the same time in the same core’s L1 Data cache.
    • Requires Hypervisor-Specific Mitigations for hosts running on Intel hardware.

Affected Products: –

The following Intel-based platforms are potentially impacted by these issues.

Intel® Core™ i3 processor (45nm and 32nm)
Intel® Core™ i5 processor (45nm and 32nm)
Intel® Core™ i7 processor (45nm and 32nm)
Intel® Core™ M processor family (45nm and 32nm)
2nd generation Intel® Core™ processors
3rd generation Intel® Core™ processors
4th generation Intel® Core™ processors
5th generation Intel® Core™ processors
6th generation Intel® Core™ processors **
7th generation Intel® Core™ processors **
8th generation Intel® Core™ processors **
Intel® Core™ X-series Processor Family for Intel® X99 platforms
Intel® Core™ X-series Processor Family for Intel® X299 platforms
Intel® Xeon® processor 3400 series
Intel® Xeon® processor 3600 series
Intel® Xeon® processor 5500 series
Intel® Xeon® processor 5600 series
Intel® Xeon® processor 6500 series
Intel® Xeon® processor 7500 series
Intel® Xeon® Processor E3 Family
Intel® Xeon® Processor E3 v2 Family
Intel® Xeon® Processor E3 v3 Family
Intel® Xeon® Processor E3 v4 Family
Intel® Xeon® Processor E3 v5 Family **
Intel® Xeon® Processor E3 v6 Family **
Intel® Xeon® Processor E5 Family
Intel® Xeon® Processor E5 v2 Family
Intel® Xeon® Processor E5 v3 Family
Intel® Xeon® Processor E5 v4 Family
Intel® Xeon® Processor E7 Family
Intel® Xeon® Processor E7 v2 Family
Intel® Xeon® Processor E7 v3 Family
Intel® Xeon® Processor E7 v4 Family
Intel® Xeon® Processor Scalable Family
Intel® Xeon® Processor D (1500, 2100)

** indicates Intel microprocessors affected by CVE-2018-3615 – L1 Terminal Fault: SGX

How to Mitigate L1TF in your VMware Environment: –

You need to ensure that all virtualized operating systems have been updated. Additional steps include turning off hyper-threading in some scenarios and enabling specific hypervisor core-scheduling features.

Mitigation of CVE-2018-3615 (L1 Terminal Fault – SGX) – {No action for VMware Admins}:

  • CVE-2018-3615 does not affect VMware products and/or services. Hence no mitigation is required for VMware admins.

Mitigation of CVE-2018-3620 (L1 Terminal Fault – OS) – {More Specific to 3rd party Vendors}:

  • Mitigation of CVE-2018-3620 requires operating system-specific mitigations. Virtual appliances and VMware SaaS offerings may be impacted.
  • Products that ship as an installable Windows or Linux binary are not directly affected, but patches may be required from the vendor of the operating system on which these products are installed.
  • It is recommended to contact 3rd-party vendors of appliances and SaaS offerings for their CVE-2018-3620 mitigation plans. For example, if you are using a Cisco virtual appliance, you need to contact Cisco for the mitigation plan.
  • For VMware appliances, VMware has listed the unaffected appliances here.

Mitigation of CVE-2018-3646 (L1 Terminal Fault – VMM) – {More specific to VMware Admins}:

  • Mitigation of CVE-2018-3646 requires hypervisor-specific mitigations for hosts running on Intel hardware.
  • As a VMware admin, you need to focus more on CVE-2018-3646, as it directly impacts hypervisors and virtual machines with Intel CPUs. The affected product list can be found above.
  • Sequential-context attack vector: a malicious VM can potentially infer recently accessed L1 data of a previous context (hypervisor thread or other VM thread) on either logical processor of a processor core.
  • Concurrent-context attack vector: a malicious VM can potentially infer recently accessed L1 data of a concurrently executing context (hypervisor thread or other VM thread) on the other logical processor of a hyperthreading-enabled processor core.

There are three phases to mitigate CVE-2018-3646 as mentioned below.

  1. Update Phase
  2. Planning Phase
  3. Scheduler-Enablement Phase


To summarize the three mitigation phases for CVE-2018-3646, here are the quick steps:

  • Update vCenter Server using the products listed in the VMSA.
  • Patch your ESXi hosts using the products listed in the VMSA.
  • Enable the ESXi Side-Channel-Aware Scheduler using the vSphere Web Client:
    • Set VMkernel.Boot.hyperthreadingMitigation to True in the Advanced Settings of the ESXi host.


You can refer to VMware KB article 55806 for the in-depth mitigation plan for CVE-2018-3646.

Your environment is secure now. Enjoy the monsoon.

That’s all from this article. If you want to add your inputs, please do share them in the comment box.


Share this article with others if you found it useful.





Learning Modules on Kubernetes for Beginners

Good Day All! Hope you are enjoying your weekend.

Over the last couple of days, I have been exploring Kubernetes in my lab environment. As a beginner, I explored many new things. I tried to gather all the W-H questions which may come to your mind if you are thinking of starting with Kubernetes.


Based on that, I prepared a learning module on Kubernetes especially for beginners. If you are new to Kubernetes, you might have a lot of queries, such as how and where to start from scratch. I segregated all such queries into different topics and prepared a complete learning module especially for techies who have just jumped into, or are thinking of starting with, this in-demand technology.

Here is the complete Kubernetes for Beginners module: –


Topic 1 – What is Kubernetes? | Learn Kubernetes – Part 1

Topic 2 – Components and Architecture of Kubernetes | Learn Kubernetes – Part 2

Topic 3 – Versions of Kubernetes | Learn Kubernetes – Part 3

Topic 4 – Kubernetes terminology every admin needs to know | Learn Kubernetes – Part 4

Topic 5 – Getting Started with Setting Up and Configuring Kubernetes | Learn Kubernetes – Part 5

Topic 6 – How to Install Kubernetes on Windows 10 with Hyper-V using Minikube | Learn Kubernetes – Part 6

Over the next few days, I am going to add more topics here as I learn new things as a beginner. I will get back to you through this blog. Please stay tuned.

If you want to add some topics here, or if I made a mistake, please do not hesitate to share in the comment box or email us at


If you think that it is useful, please do share with others.

vSphere 6.7 ICM – Topic 5.2 – Configure Virtual Switch Security, Traffic-Shaping, and Load-Balancing Policies

Continuing with the vSphere 6.7 Install, Configure, and Manage modules, today we are going to cover vSphere networking, which is one of the tougher areas for VMware admins, who often face a lot of difficulty when applying network policies.

Points to Cover: –

  1. Understand Network Policies
  2. Security Polices
  3. Traffic Shaping
  4. Teaming & Failover
  5. Understand Load Balancing Policies
  6. What is MTU (Maximum Transmission Unit)

In the last section, we learnt the concepts of the vSphere Standard Switch and how to create a vSwitch in a vSphere environment. In this section, we are going to explore configuring virtual switch security, traffic-shaping, and load-balancing policies.

  • Standard switch policies are configured to better enhance the security of a complex virtual environment.
  • You can create multiple port groups on a standard switch, and then apply different policies to each port group.
  • You can also apply the same network policies to all port groups or to the whole standard switch.

Network Policies Apply at: –


How to Apply these Policies?

  • Log in to vCenter Server using the Web Client.
  • Click on the host and go to Networking under the Configure tab.
  • Select the virtual switch and click on the pencil icon to edit it.

The vSphere Standard Switch has the following network policies:

  • Security

  • Traffic Shaping

  • Teaming & Failover

  • MTU

Security: –


  • Promiscuous Mode (Accept or Reject)

    • It can be defined at the virtual switch or port group level.
    • If you change it to Accept, the guest OS can receive all traffic which passes through the vSwitch or port group.
    • When promiscuous mode is enabled at the port group level, objects defined within that port group have the option of receiving all incoming traffic on the vSwitch.
    • When promiscuous mode is enabled at the virtual switch level, all port groups within the vSwitch default to allowing promiscuous mode. However, promiscuous mode can be explicitly disabled at one or more port groups within the vSwitch.
    • By default, this policy is set to Reject on virtual switches (standard or distributed).
    • Let's take an example: we have two port groups, PG-A and PG-B. In PG-A, we have two virtual machines, VM-1 and VM-2. In PG-B, we have another two virtual machines, VM-3 and VM-4.
    • If promiscuous mode is set to Reject, the VMs in PG-A and PG-B will not see each other's traffic; packets are delivered point to point only.
    • But if you set it to Accept, the traffic is visible across PG-A and PG-B and their VMs: VM-1, VM-2, VM-3, and VM-4.

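The PG-A/PG-B example boils down to a simple rule, sketched here as a toy model (not ESXi code):

```python
# Toy model: with promiscuous mode rejected, a vNIC sees only frames
# addressed to it; with it accepted, it sees every frame on the vSwitch.
def frames_seen(vnic_mac, frames, promiscuous):
    if promiscuous:
        return frames
    return [f for f in frames if f["dst"] == vnic_mac]

frames = [{"dst": "MAC-VM1"}, {"dst": "MAC-VM3"}]
print(len(frames_seen("MAC-VM1", frames, promiscuous=False)))  # 1
print(len(frames_seen("MAC-VM1", frames, promiscuous=True)))   # 2
```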

  • MAC Address Changes (Accept or Reject)

    • This security policy is set to Accept by default on the standard switch and Reject on the distributed switch.
    • If it is set to Accept, the host accepts requests to change the effective MAC address to one different from the initial MAC address.
    • MAC Address Changes is concerned with incoming traffic.
    • All virtual machines have two MAC addresses:
      1. Initial MAC – It is generated automatically and resides in the configuration file (VMX file). The guest OS has no control over the initial MAC address.
      2. Effective MAC – It is configured by the guest operating system and is used during communication with other virtual machines. The effective MAC address is included in network communication as the source MAC of the virtual machine. When you set a manual MAC address in a VM, that is an effective MAC.
    • Let's take an example: you have a virtual machine with initial MAC address 00:50:56:AF:3C:01. Now, for some reason, you change the MAC address of the virtual machine, and the effective MAC address becomes 00:50:56:AF:3C:02.
    • The virtual machine's initial address and effective address must agree with each other. If the guest OS changes the effective address, the port compares the effective address to the initial address.
    • If the MAC Address Changes policy is set to Reject and the initial and effective addresses do not agree, the port is brought administratively down.
    • If the MAC Address Changes policy is set to Accept, the new effective MAC address is accepted, the ARP table is updated automatically, and the virtual machine works as usual.
  • Forged Transmits (Accept or Reject)

    • In this case (Accept), the host does not compare the source MAC of frames transmitted from a VM with the VM's effective MAC.
    • Forged Transmits is concerned with outgoing traffic.
    • If Forged Transmits is set to Reject, traffic will not be passed from the virtual machine to the vSwitch (outgoing) if the source MAC address in the frame does not match the effective MAC address.
    • MAC Address Changes and Forged Transmits are also used by Windows as a mechanism to protect against duplicate IP addresses on the network. If a Windows system detects an IP address conflict, it sends out a forged transmit to reset the IP to the original MAC of the machine it thinks originally owned it, and then takes itself off the network. This protection mechanism for duplicate IP addresses won't work unless these security settings are allowed.
    • It is set to Accept on the standard switch and Reject on the distributed switch.
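The two MAC-related checks above can be summarized in one sketch (a simplified model; the real port logic lives in the VMkernel):

```python
# Simplified model of the two MAC security checks described above.
def mac_changes_port_state(initial_mac, effective_mac, policy):
    """Incoming direction: Reject blocks the port if the MACs disagree."""
    if effective_mac == initial_mac or policy == "Accept":
        return "up"
    return "blocked"

def forged_transmit_allowed(effective_mac, frame_src_mac, policy):
    """Outgoing direction: Reject drops frames with a mismatched source MAC."""
    return policy == "Accept" or frame_src_mac == effective_mac

print(mac_changes_port_state("00:50:56:af:3c:01", "00:50:56:af:3c:02", "Reject"))  # blocked
print(forged_transmit_allowed("00:50:56:af:3c:01", "00:50:56:af:3c:99", "Reject"))  # False
```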

Traffic Shaping: –

Traffic shaping is a feature to control the quantity of traffic that is allowed to flow across a link. That is, rather than letting the traffic go as fast as it possibly can, you can set limits on how much traffic can be sent.


You can configure a traffic shaping policy for each port group on a standard or distributed switch.

Traffic shaping is applied to outbound network traffic on standard switches, and to both inbound and outbound traffic (ingress and egress traffic shaping) on distributed switches.

Traffic Shaping is defined by:


  • Average bandwidth (100000 Kbits/sec)

    • Establishes the number of bits per second to allow across a port, averaged over time.
    • This number is the allowed average load.
    • By default, traffic gets the bandwidth defined in Average bandwidth.
  • Peak bandwidth (100000 Kbits/sec)

    • The maximum number of bits per second to allow across a port when it is sending or receiving a burst of traffic.
    • This number limits the bandwidth that a port uses when it is using its burst bonus.
    • Average bandwidth can be exceeded when needed by specifying a higher Peak bandwidth value.
  • Burst size (102400 Kbytes)

    • The maximum number of kilobytes to allow in a burst that is transmitted at the peak bandwidth rate.
    • When the port needs more bandwidth than specified by the average bandwidth, it might be allowed to temporarily transmit data at a higher speed if a burst bonus is available. So, when you need to send more traffic than the average bandwidth value allows, you transmit a burst of traffic, which is more than the allowed average bandwidth.
    • Traffic is allowed to burst until the value of Burst size has been exceeded.
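The interaction of the three values can be modeled as a token bucket: under-use of the average accrues a burst bonus (capped by the burst size) that can later be spent at up to the peak rate. A simplified sketch with made-up numbers, not the actual vSwitch implementation:

```python
# Simplified token-bucket model of traffic shaping (illustrative only).
class Shaper:
    def __init__(self, avg_kbps, peak_kbps, burst_kb):
        self.avg, self.peak = avg_kbps, peak_kbps
        self.bonus, self.bonus_cap = 0, burst_kb * 8  # bonus held in kilobits

    def tick(self, offered_kbits):
        """One second of traffic. Returns kilobits actually sent."""
        allowed = min(offered_kbits,
                      self.avg + min(self.bonus, self.peak - self.avg))
        if offered_kbits < self.avg:   # under-use accrues burst bonus
            self.bonus = min(self.bonus_cap, self.bonus + self.avg - offered_kbits)
        else:                          # bursting spends the bonus
            self.bonus = max(0, self.bonus - max(0, allowed - self.avg))
        return allowed

s = Shaper(avg_kbps=100, peak_kbps=200, burst_kb=100)
print(s.tick(40))   # 40: under the average, so a bonus accrues
print(s.tick(400))  # 160: average plus the accrued bonus, capped by peak
```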

Teaming & Failover: –


  • Load Balancing Policy:

    • Route based on the originating virtual port ID

      • Each virtual machine has a virtual port ID on the vSwitch. The port ID of a virtual machine is fixed while the virtual machine runs on the same host. If you migrate, power off, or delete the VM, its port ID on the virtual switch becomes free, and the port ID changes at the next power-on.
      • The vSwitch selects uplinks based on the virtual machine port IDs.
      • This load balancing method is used by default on Standard and Distributed Switches.
    • Route based on IP hash

      • Load balancing is based on the source/destination IP addresses.
      • vSwitch selects uplinks for virtual machines based on the source and destination IP address of each packet.
      • With the IP hash load balancing policy, all physical switch ports connected to the active uplinks must be in EtherChannel mode.
      • This load balancing should be set for all port groups using the same set of uplinks.
      • Physical adapters attached on vSwitch must be in Active/Active.
      • Beacon probing is not supported with IP Hash.
    • Route based on source MAC hash

      • Load balancing is based on the virtual machine's MAC address.
      • To calculate an uplink for a virtual machine, the virtual switch uses the virtual machine MAC address and the number of uplinks in the NIC team.
    • Use explicit failover order

      • No actual load balancing is performed with this policy: the vSwitch always uses the highest-order uplink from the list of active adapters which passes the failover detection criteria.
  • Network Failure Detection Policy:

    • Link Status only

      • It is basically use to check the link if physical NIC is Up or down.
      • This option detects failures, such as cable pulls and physical switch power failures, but not configuration errors, such as a physical switch port being blocked by spanning tree or mis-configured to the wrong VLAN or cable pulls on the other side of a physical switch.
    • Beacon Probing

      • Beacon Probing checks the health and connectivity between each vmnic (physical NIC) in the same vSwitch.
      • This option detects many failures in depth that are not detected by link status alone.
      • ESXi sends a small beacon packet out of each physical network card and checks whether the packet is received by the other physical network cards in the same vSwitch. If a vmnic receives the packet, the connectivity between those two physical NICs is healthy.
      • You should have at least 3 physical network ports in the same vSwitch before you turn on Beacon Probing. With only 2 physical network ports, if the beacon packets cannot reach each other, the vSwitch cannot determine which NIC needs to be taken out of service.
      • Do not use IP hash for load balancing.
  • Notify Switches Policy: (Yes/No)

    • When the Notify Switches policy is set to “Yes”, the ESXi host sends a notification over the network whenever a failover event occurs or a virtual NIC is connected to the vSwitch.
    • Physical switches use this notification to update their MAC address tables, so traffic reaches the VM over the new uplink with minimal delay.
  • Failback Policy: (Yes/No)

    • When a failed physical NIC comes back online, the vSwitch sets it back to active, replacing the standby NIC that had taken over.
    • By default it is set to Yes.
  • Failover Order Policy:

It specifies how the uplink adapters are used to distribute the workload and handle failover.

    • Active Adapters

      • The vSwitch continues to use these adapters as long as their network connectivity is up and active.
    • Standby Adapters

      • vSwitch uses this adapter if one of the active adapter’s connectivity is unavailable.
    • Unused Adapters

      • When a physical adapter is added to this section, the vSwitch never uses it, even during failover.
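As a rough illustration (not VMware’s actual implementation), the first and third load balancing policies above can be thought of as deterministic mappings from a VM’s port ID or MAC address onto the uplink list:

```python
# Minimal sketch of how a vSwitch might pick an uplink under two
# teaming policies. The modulo-based mappings are illustrative only.

def uplink_by_port_id(port_id: int, num_uplinks: int) -> int:
    """Route based on originating virtual port ID: the port ID is
    mapped onto the uplink list, so the choice stays fixed while the
    VM keeps the same port (until power-off or migration)."""
    return port_id % num_uplinks

def uplink_by_mac_hash(mac: str, num_uplinks: int) -> int:
    """Route based on source MAC hash: hash the VM's MAC address
    bytes and map the result onto the uplink list."""
    mac_bytes = bytes(int(octet, 16) for octet in mac.split(":"))
    return sum(mac_bytes) % num_uplinks

# Two uplinks (index 0 = vmnic0, index 1 = vmnic1):
print(uplink_by_port_id(7, 2))                      # 1
print(uplink_by_mac_hash("00:50:56:9a:1b:2c", 2))   # 1
```

The key property both policies share is that a given VM always maps to the same uplink while its port ID or MAC address is unchanged, so no per-packet state is needed.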

What is MTU (Maximum Transmission Unit)?

  • The MTU (maximum transmission unit) is the largest packet or frame size, specified in octets (eight-bit bytes), that can be sent in a packet- or frame-based network such as the Internet.
  • The default MTU size is 1500 bytes, which can be increased up to 9000 bytes (Jumbo Frames).
  • Jumbo Frames can be enabled on a vSwitch, vDS, and VMkernel Adapter.
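On an ESXi host, the MTU can also be raised from the command line. The sketch below assumes a standard switch named vSwitch0, a VMkernel adapter vmk1, and a placeholder peer IP (192.168.1.20); adjust all names for your environment:

```shell
# Raise the MTU of a standard vSwitch to 9000 (Jumbo Frames)
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Raise the MTU of a VMkernel adapter
esxcli network ip interface set -i vmk1 -m 9000

# Verify end to end with a don't-fragment jumbo ping
# (8972 = 9000 minus 28 bytes of IP + ICMP headers)
vmkping -d -s 8972 192.168.1.20
```

Remember that Jumbo Frames must be enabled end to end, including the physical switch ports, or the vmkping test above will fail.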


That’s all on this topic. vSphere networking is a complex and interesting topic, so I am planning to write separate blogs with detailed information on each of the points mentioned above. Stay tuned for the coming blogs.

Thanks for visiting here. Share this article if you found it useful. Be sociable.

How to – Install VCSA 6.7 (Virtual Center Server Appliance)

Step by Step Procedure to Install VCSA 6.7

vCenter Installation Type: –

  1. Windows Based Virtual Center
  2. VCSA (Virtual Center Server Appliance)

In this blog we are covering the installation of VCSA, the appliance-based Virtual Center. There are many benefits to using VCSA. One is saving the cost of a Windows Operating System license and a SQL database license. Another is that it provides native High Availability for the Virtual Center. We will cover VCSA benefits in another post.

Installation Mode: –

  1. Embedded Platform Services Controller (PSC)
  2. External Platform Services Controller

Prerequisites: –

  • ISO image for VCSA 6.7
  • A base machine from which you will run the installer.
  • An ESXi host on which the appliance will be deployed as vCenter.

Hardware Requirements for a vCSA 6.7 with an Embedded or External Platform Services Controller

System requirement for VCSA 6.7 Embedded Mode

System Requirements for the base machine for GUI and CLI Installer: –

System Requirement for GUI and CLI Installer

Step by Step Procedure to Install VCSA: –

  • Download the VCSA 6.7 ISO image from the VMware portal.
  • Mount the ISO image on the Windows machine from which you will run the installer.


  • Go to the vcsa-ui-installer\win32 folder on the mounted ISO.
  • Run the installer file.


  • There are two stages in VCSA Installation.
    • Stage 1 – Deploy Appliance
    • Stage 2 – Configure Appliance
  • We will proceed with Stage 1 then Stage 2 in sequence.
  • There are four options in installation screen under stage 1.
    • Install
    • Upgrade
    • Migrate
    • Restore
  • Click on Install option as we are going to install new vCenter.


  • Introduction screen will appear, Click Next.


  • Accept the license agreement, Click Next.


  • Select the deployment type. The two options are Embedded Platform Services Controller and External Platform Services Controller.
  • Choose Embedded Platform Services Controller, Click Next.


  • Specify the appliance deployment target.
    • ESXi host or vCenter Server name
    • Type the host name or IP address of the target
    • By default it shows HTTPS port 443
    • Type the username and password
  • Click on Next.


  • Click on yes to accept the Certificate.


  • Give a name to the vCenter appliance (the VM name).
  • Set the root password for the VCSA.
  • Click on Next.


  • Select deployment size according to your requirement.
  • Click on Next.


  • Select the datastore where the appliance will be deployed, then click Next.


  • Configure Network Settings.
    • Choose Network
    • IP Version
    • IP assignment
    • FQDN
    • IP address
    • Subnet mask or prefix length
    • Default gateway
    • DNS servers
  • Click on Next.


  • Now it is ready to start the deployment of appliance in stage 1.
  • Click on Finish.


  • Deployment of vCenter Server appliance is in progress.


  • Deployment has been completed. Click Continue to move to next stage.


  • Stage 2: Set up vCenter Server Appliance with an Embedded PSC.


  • Click on Next.


  • Here it shows two options for SSO configuration.
    • Create a new SSO domain
    • Join an existing SSO domain
  • Select the option to Create a new SSO domain, and provide the below information.
    • SSO user name
    • SSO password
    • Confirm password
  • Click on Next.


  • Optionally, select Join the VMware Customer Experience Improvement Program (CEIP).
  • Click on Next.


  • Stage 2 is Ready to Complete.
  • Click on Finish.


  • Click on OK


  • Setup is in progress.


  • Click on Close.


Installation and configuration of VCSA has been completed.
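The same embedded deployment can also be performed unattended with the CLI installer (vcsa-cli-installer\win32\vcsa-deploy.exe) and a JSON template. The sketch below mirrors the inputs collected in Stage 1 and Stage 2; key names can vary slightly between builds, and all host names, IPs, and passwords here are placeholders, so start from the sample templates shipped on the ISO:

```json
{
  "__version": "2.13.0",
  "new_vcsa": {
    "esxi": {
      "hostname": "esxi01.lab.local",
      "username": "root",
      "password": "<esxi-root-password>",
      "deployment_network": "VM Network",
      "datastore": "datastore1"
    },
    "appliance": {
      "deployment_option": "small",
      "name": "vcsa01",
      "thin_disk_mode": true
    },
    "network": {
      "ip_family": "ipv4",
      "mode": "static",
      "ip": "192.168.1.50",
      "prefix": "24",
      "gateway": "192.168.1.1",
      "dns_servers": ["192.168.1.10"],
      "system_name": "vcsa01.lab.local"
    },
    "os": {
      "password": "<appliance-root-password>",
      "ssh_enable": true
    },
    "sso": {
      "password": "<sso-admin-password>",
      "domain_name": "vsphere.local"
    }
  },
  "ceip": {
    "settings": { "ceip_enabled": true }
  }
}
```

Run it with `vcsa-deploy install --accept-eula <template>.json` from the vcsa-cli-installer folder of the mounted ISO; the installer then performs Stage 1 and Stage 2 without the GUI.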
