ESXTOP resources

In my continuing study of VMware I wanted to go more in-depth on the tools under the covers.  The most important tool for performance tuning and troubleshooting is ESXTOP, which is similar to the top command in Linux but is geared toward ESX and ESXi installations.  Instead of regurgitating and paraphrasing what I have found, I will supply links to the appropriate pages.

First off, I find Duncan Epping’s page on ESXTOP outstanding.  Not only does he sum up the counters from the ESXTOP bible, but he also gives you recommended thresholds, so you have a point of reference to help you spot issues right away.  He also explains how to run ESXTOP in batch mode and then how to interpret the data using Excel, ESXPlot, and PerfMon.  This is my go-to page for immediate reference.
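Batch mode is worth trying hands-on: `esxtop -b` writes PerfMon-style CSV that any text tool can slice. As a minimal sketch, the snippet below filters a made-up stand-in for that CSV with awk; the hostname, counters, and values are all hypothetical, and on a real host the file would come from something like `esxtop -b -d 10 -n 360 > esxtop.csv`.

```shell
# Sketch: slicing esxtop batch-mode output (CSV) from the shell.
# The CSV below is a simplified, invented stand-in for real batch output.
cat > esxtop.csv <<'EOF'
"(PDH-CSV 4.0)","\\esx01\Physical Cpu(_Total)\% Util Time","\\esx01\Memory\Free MBytes"
"10/01/2010 10:00:00","12.5","2048"
"10/01/2010 10:00:10","95.2","1980"
"10/01/2010 10:00:20","40.1","2010"
EOF

# Report samples where CPU utilization (column 2) exceeds a threshold.
awk -F'","' 'NR > 1 { ts = $1; cpu = $2; gsub(/"/, "", ts); if (cpu + 0 > 90) print ts, cpu }' esxtop.csv
```

The same one-liner scales to the real thing; only the column positions change with the counters you capture.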

Next up is the ESXTOP bible.  I found two versions of this: one for vSphere 4.0 and one for vSphere 4.1.  I have NOT compared both of them; I have focused mainly on the 4.1 version as this is the environment I am currently supporting.  This page does a deep dive into the counters explaining what they are and how they are derived.

Then I found a handy little reference card that gives a short summary of the most important counters to know.

Finally, I found a reference to a PowerShell cmdlet that allows you to access this tool via a script.  When I looked for more information I found some articles by LucD going in-depth on how to use the cmdlet.

Look at these great references and let me know if I missed any others.

Understand and apply LUN masking using PSA-related commands

Per knowledge base article 1009449.

  1. Look at the Multipath Plug-ins currently installed on your ESX with the command:

    # esxcfg-mpath -G

    The output indicates that there are, at a minimum, two plug-ins: the VMware Native Multipath Plug-in (NMP) and the MASK_PATH plug-in, which is used for masking LUNs. There may be other plug-ins if third-party software (such as EMC PowerPath) is installed.

  2. List all the claimrules currently on the ESX with the command:

    # esxcli corestorage claimrule list

    There are two MASK_PATH entries: one of class runtime and the other of class file.

    The runtime class shows the rules currently active in the PSA; the file class refers to the rules defined in /etc/vmware/esx.conf. These are normally identical, but they can differ if you are in the process of modifying /etc/vmware/esx.conf.

  3. Add a rule to hide the LUN with the command:

    # esxcli corestorage claimrule add --rule <number> -t location -A <hba_adapter> -C <channel> -T <target> -L <lun> -P MASK_PATH

    The parameters -A <hba_adapter> -C <channel> -T <target> -L <lun> define a unique path. You can leave some of them unspecified if the LUN is uniquely defined. The value for the --rule parameter can be any number between 101 and 200 that does not conflict with a pre-existing rule number from step 2.

  4. Verify that the rule has taken with the command:

    # esxcli corestorage claimrule list

    The output shows the new rule, but only with class file. You must load it into the PSA before it takes effect.

  5. Reload your claimrules with the command:

    # esxcli corestorage claimrule load

  6. Re-examine your claim rules and verify that you can see both the file and runtime classes. Run the command:

    # esxcli corestorage claimrule list

  7. Unclaim all paths to a device and then run the loaded claimrules on each of the paths to reclaim them. Run the command:

    # esxcli corestorage claiming reclaim -d <naa_id>

    where <naa_id> is the NAA ID of the LUN being unpresented. This command attempts to unclaim all paths to the device and then runs the loaded claimrules on each of the unclaimed paths to attempt to reclaim them.

  8. Verify that the masked device is no longer used by the ESX host.

    If you are masking a datastore, perform one of these options:

    • Connect the vSphere Client to the host, click Host > Configuration > Storage, then click Refresh. The masked datastore does not appear in the list.
    • Rescan the host by navigating to Host > Configuration > Storage Adapters > Rescan All.
    • Run the command:

      # esxcfg-scsidevs -m

      The masked datastore does not appear in the list.

      To verify that a masked LUN is no longer an active device, run the command:

      # esxcfg-mpath -L | grep <naa_id>

      If the command returns no output, the masked LUN has no active paths.
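Pulled together, steps 3 through 7 form a short sequence. The sketch below only prints the commands (a dry run) so it can be sanity-checked anywhere; the rule number and adapter/channel/target/LUN values are hypothetical examples, and the `<naa_id>` placeholder stays a placeholder. On a real ESX host you would run each printed command directly.

```shell
# Dry run of the masking sequence from the steps above.
# RULE and the A/C/T/L values are made-up examples, not real paths.
RULE=110; HBA=vmhba2; CH=0; TGT=1; LUN=20

mask_lun_commands() {
  echo "esxcli corestorage claimrule add --rule $RULE -t location -A $HBA -C $CH -T $TGT -L $LUN -P MASK_PATH"
  echo "esxcli corestorage claimrule load"
  echo "esxcli corestorage claimrule list"
  echo "esxcli corestorage claiming reclaim -d <naa_id>"
}
mask_lun_commands
```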


Determine appropriate RAID level for various Virtual Machine workloads

When you determine the volume layout, evaluate the type of data to be stored and the number of volumes you want to create.  Each logical drive should be on a separate volume, both for better performance and for easy future expansion if needed.


Typically you would place the operating system and application data on a RAID 5 volume.  For transaction logs and other volumes with a high rate of writes, you should use a RAID 1 or RAID 1+0 volume.
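To make the capacity side of that trade-off concrete, here is a quick back-of-the-envelope sketch using the standard rules of thumb (one disk's worth of parity for RAID 5, a full mirror for RAID 1+0). The disk count and size are made-up examples.

```shell
# Sketch: usable capacity for common RAID levels, given N equal disks.
# Standard rules of thumb, not vendor-specific figures:
#   RAID 5   usable = (N - 1) * disk_size   (one disk of parity)
#   RAID 1+0 usable = (N / 2) * disk_size   (everything mirrored)
N=8; DISK_GB=300   # hypothetical array: eight 300 GB disks

raid5=$(( (N - 1) * DISK_GB ))
raid10=$(( N / 2 * DISK_GB ))
echo "RAID 5:   ${raid5} GB usable"
echo "RAID 1+0: ${raid10} GB usable"
```

The write penalty runs the other way: RAID 5 costs extra I/Os per write for parity, which is exactly why write-heavy volumes like transaction logs go on RAID 1 or 1+0.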

Basic vmkfstools Syntax

Excerpt taken from the ESX man page for vmkfstools.


The long and short forms of options, shown here listed together, are equivalent.

-C, --createfs vmfs3

-b, --blocksize #[mMkK]

-S, --setfsname fsName

Create a VMFS file system on the specified partition, e.g. vml.<vml_ID>:1. The partition becomes the file system's head partition. The file block size can be specified via the `-b` option. The default file block size is 1MB. The file block size must be either 1MB, 2MB, 4MB or 8MB.

The -S option sets the label of the VMFS file system, and can only be used in conjunction with the `-C` option. This label can then be used to specify a VMFS file system in subsequent vmkfstools commands or in a virtual machine configuration file. The label will also appear in a listing produced by `ls -l /vmfs/volumes` as a symbolic link to the VMFS file system. VMFS labels can be up to 128 characters long. They cannot contain leading or trailing spaces. After creating the file system, the label can be changed using the command `ln -sf /vmfs/volumes/<FS UUID> /vmfs/volumes/<New label>`.
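The label mechanics are easy to demo off-host: a VMFS label is just a symbolic link in /vmfs/volumes pointing at the UUID directory. The sketch below reproduces that in a temporary directory; the UUID and label names are invented.

```shell
# Sketch of VMFS labeling as symlinks, in a temp dir standing in for /vmfs/volumes.
# The UUID "4a2b3c4d-00000000" and both labels are hypothetical.
vols=$(mktemp -d)
mkdir "$vols/4a2b3c4d-00000000"                      # stands in for the <FS UUID> directory
ln -s  "$vols/4a2b3c4d-00000000" "$vols/datastore1"  # label created by -S at creation time
ln -sf "$vols/4a2b3c4d-00000000" "$vols/prod_ds"     # relabeling, as the man page describes
readlink "$vols/prod_ds"                             # both labels resolve to the same UUID
```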

-Z, --spanfs span-partition

Extend the VMFS-3 file system with the specified head partition by spanning it across the partition designated by `span-partition`. The operation erases any existing data on the spanned partition. A VMFS-3 file system can have at most 32 partitions. This option does not work on VMFS-2 file systems, as they are read-only in ESX 3.

-G, --growfs grow-partition

Extend the VMFS-3 file system with the specified `grow-partition`. Prior to growing the file system, use a tool such as `fdisk` or `parted` to create the partition first.  Once the partition `grow-partition` is available, the file system can be grown by designating it with the `-G` option.  Existing data on the grow partition is preserved.

-P, --queryfs

-h, --human-readable

List the attributes of a VMFS file system when used on any file or directory of a VMFS file system. It lists the VMFS version number, the number of partitions constituting the specified VMFS file system, the file system label (if any), the file system UUID, the available space, and the device names of all the partitions constituting the file system. If partitions backing the VMFS file system go offline, the number of partitions and the available space reported change accordingly. The `-h` option causes sizes to be printed in human-readable format (such as 5k, 12.1M, or 2.1G).


-c, --createvirtualdisk #[gGmMkK]

-a, --adaptertype [buslogic|lsilogic|ide] srcFile

-d, --diskformat [thin|zeroedthick|eagerzeroedthick]

Create a virtual disk with the specified size on the VMFS file system. The size is specified in bytes by default, but can be specified in kilobytes, megabytes or gigabytes by adding a suffix of `k`, `m`, or `g` respectively. The `--adaptertype` option allows users to indicate which device driver should be used to communicate with the virtual disk. The default disk format is `zeroedthick`.
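The `#[gGmMkK]` size argument is easy to misread, so here is a small sketch of how the suffixes scale. `to_bytes` is a hypothetical helper written for illustration, not part of vmkfstools.

```shell
# Sketch: how the #[gGmMkK] size suffixes scale, e.g. for -c/--createvirtualdisk.
# to_bytes is an invented helper that mimics the suffix convention.
to_bytes() {
  num=${1%[gGmMkK]}                              # strip the trailing suffix, if any
  case $1 in
    *[kK]) echo $(( num * 1024 )) ;;             # kilobytes
    *[mM]) echo $(( num * 1024 * 1024 )) ;;      # megabytes
    *[gG]) echo $(( num * 1024 * 1024 * 1024 )) ;;  # gigabytes
    *)     echo "$num" ;;                        # no suffix: plain bytes
  esac
}

to_bytes 10g   # the size a "vmkfstools -c 10g ..." invocation would request
```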

-U, --deletevirtualdisk

Delete files associated with the specified virtual disk.

-E, --renamevirtualdisk srcDisk

Rename files associated with a specified virtual disk to the specified name.

-i, --clonevirtualdisk srcDisk

-d, --diskformat [rdm:<device>|rdmp:<device>|zeroedthick|thin|eagerzeroedthick|2gbsparse]

Create a copy of a virtual disk or raw disk.  The copy will be in the specified disk format.  The default disk format is pre-allocated.

-e, --exportvirtualdisk dstDisk

This operation is deprecated. Use `-i srcDisk -d 2gbsparse` to achieve the same result.

-X, --extendvirtualdisk #[gGmMkK]

Extend the specified VMFS virtual disk to the specified length.  You can extend the virtual disk into `eagerzeroedthick` format by specifying the `-d eagerzeroedthick` option.  Extending a virtual disk breaks any existing snapshots. This command is useful for extending the size of a virtual disk allocated to a virtual machine after the virtual machine has been created; however, it requires that the guest operating system be able to recognize the new size of the virtual disk and take advantage of it.

-M, --migratevirtualdisk

Migrate an ESX 2 virtual disk to an ESX 3 virtual disk.

-r, --createrdm /vmfs/devices/disks/…

Map a raw disk to a file on a VMFS file system.  Once the mapping is established, it can be used to access the raw disk like a normal VMFS virtual disk.  The file length of the mapping is the same as the size of the raw disk it points to.

-q, --queryrdm

List the attributes of a raw disk mapping.  When used with an `rdm:<device>` specification, it prints the vml of the raw disk corresponding to the mapping referenced by <device>.  It also prints identification information for the raw disk (if any).

-z, --createrdmpassthru /vmfs/devices/disks/…

Map a passthrough raw disk to a file on a VMFS file system.  This allows a virtual machine to bypass the VMkernel SCSI command filtering done for VMFS virtual disks.  Once the mapping is established, it can be used to access the passthrough raw disk like a normal VMFS virtual disk.

-v, --verbose #

Setting the verbosity level lists additional information about the virtual disk configuration.  This option is ignored for the `-q` (queryrdm) option.

-g, --geometry

Get the geometry information (cylinders, heads, sectors) of a virtual disk.

-w, --writezeros

Initialize the virtual disk with zeros. Any existing data on the virtual disk is lost.

-j, --inflatedisk

Convert a thin virtual disk to preallocated, with the guarantee that any data on the thin disk is preserved and any blocks that were not allocated are allocated and zeroed out.

-k, --eagerzero

Convert a preallocated virtual disk to `eagerzeroedthick`, maintaining any existing data.



Supported disk formats (values for the `-d` option):

zeroedthick

Space required for the virtual disk is allocated at creation time.

eagerzeroedthick

Space required for the virtual disk is allocated at creation time.  In contrast to the zeroedthick format, the data remaining on the physical device is zeroed out during creation.

thin

Thin-provisioned virtual disk.

rdm:<device>

Virtual compatibility mode raw disk mapping.

rdmp:<device>

Physical compatibility mode (pass-through) raw disk mapping.

2gbsparse

A sparse disk with 2GB maximum extent size.

Understand and apply VMFS resignaturing

Use datastore resignaturing if you want to retain the data stored on the VMFS datastore copy.
To resignature a mounted datastore copy, first unmount it. Before you resignature a VMFS datastore, perform a storage rescan on your host so that the host updates its view of LUNs presented to it and discovers any LUN copies.


  1. Log in to the vSphere Client and select the server from the inventory panel.
  2. Click the Configuration tab and click Storage in the Hardware panel.
  3. Click Add Storage.
  4. Select the Disk/LUN storage type and click Next.
  5. From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next.  The name present in the VMFS Label column indicates that the LUN contains a copy of an existing VMFS datastore.
  6. Under Mount Options, select Assign a New Signature and click Next.
  7. In the Ready to Complete page, review the datastore configuration information and click Finish.
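For reference, ESX 4 also exposes resignaturing from the service console via esxcfg-volume. The sketch below only prints the two commands (a dry run) so it can be checked anywhere; the datastore label `datastore1` is a hypothetical example.

```shell
# Dry run: service-console equivalent of the resignaturing wizard above.
# The datastore label is a made-up example.
resignature_commands() {
  echo "esxcfg-volume -l              # list LUN copies detected as VMFS snapshots"
  echo "esxcfg-volume -r datastore1   # assign a new signature to the copy"
}
resignature_commands
```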

The information for this article was gathered from the ESX Configuration Guide.