VCAP5-DCA study notes: Objective 1.1


1.1 – Implement and Manage Complex Storage Solutions


Skills and Abilities
Determine use cases for and configure VMware DirectPath I/O
Determine requirements for and configure NPIV
Determine appropriate RAID level for various Virtual Machine workloads
Apply VMware storage best practices
Understand use cases for Raw Device Mapping
Configure vCenter Server storage filters
Understand and apply VMFS re-signaturing
Understand and apply LUN masking using PSA-related commands
Analyze I/O workloads to determine storage performance requirements
Identify and tag SSD devices
Administer hardware acceleration for VAAI
Configure and administer profile-based storage
Prepare storage for maintenance
Upgrade VMware storage infrastructure


Determine use cases for and configure VMware DirectPath I/O

General information:
-PCI and PCIe devices supported
-Max. six devices can be connected to a VM
-Not supported: Snapshots/vMotion/FT/HA/DRS/Suspend & Resume/Hot add
-Intel VT-d / AMD IOMMU needs to be enabled in host BIOS
-VM needs to be HW version >= 7
-Adding PCI device to VM creates memory reservation
-vSphere client: Green icon = device active & enabled / Orange icon = change made & reboot required

Use cases:
-Saves CPU cycles on workloads with very high packet rates
-VM can use hardware features (TCP Offload Engine, SSL offload), which are not supported by vSphere

Configuration:

To configure pass-through devices on an ESX host:

  1. Select an ESX host from the Inventory of VMware vSphere Client.
  2. In the Configuration tab, click Advanced Settings. The Pass-through Configuration page lists all available pass-through devices.
  3. Click Edit.
  4. Select the devices and click OK.
  5. When the devices are selected, they are marked with an orange icon. Reboot for the change to take effect. After rebooting, the devices are marked with a green icon and are enabled.

Note: The configuration changes are saved in the /etc/vmware/esx.conf file. 
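
If you prefer the command line, the PCI devices the host sees can also be listed from the ESXi shell (just a quick sketch; the output columns differ slightly between ESXi builds):

# esxcli hardware pci list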

To configure a PCI device on a virtual machine:

  1. From the Inventory in vSphere Client, right-click the virtual machine and choose Edit Settings.
  2. Click the Hardware tab.
  3. Click Add.
  4. Choose the PCI Device.
  5. Click Next.



Determine requirements for and configure NPIV

General Information:
-vMotion is supported
-The VM retains its assigned WWNs
-If the target host does not support NPIV, the VM falls back to the physical HBA WWN
-The RDM file must be located on the same datastore as the VM configuration files
-Cloning a VM with NPIV does not clone the assigned WWNs
-Storage vMotion is not supported
-Up to 16 WWN pairs can be created; 2 WWNs are the minimum for a failover configuration

Use case:
-A VM can meet security requirements, since it can be zoned directly to its storage
-A VM can benefit from QoS prioritization

Requirements:
-Can only be used for VMs with RDM disks
-The HBAs on the host must support NPIV (see the command example after this list to check which HBAs the host sees)
-The SAN fabric must be NPIV-aware
-Storage configuration: the NPIV LUN number and target ID must be identical to the physical LUN number and target ID
-VMware does not support heterogeneous HBAs on the same host for access to the same LUN
-Zoning must exist for both the physical host WWN and the NPIV WWN, and all paths should be zoned
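
To quickly check which HBAs the host sees (for FC HBAs the UID column contains the WWNs), the ESXi shell can be used. This is only a sketch; whether the HBA firmware actually supports NPIV still has to be verified with the vendor tools:

# esxcli storage core adapter list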

Configuration:

  1. Create the VM with the Custom wizard and use an RDM as its disk.
  2. On the Ready to Complete page, select the Edit the virtual machine settings before completion checkbox and click Continue.
  3. Click the Options tab, and select Fibre Channel NPIV.
  4. Select Generate new WWNs.
  5. Specify the number of WWNNs and WWPNs.
  6. Click Finish.


Determine appropriate RAID level for various Virtual Machine workloads

The impact of storage performance and of a correct implementation / configuration is often underestimated by VMware administrators. To cover this topic of the blueprint, I recommend building a good understanding of RAID levels and IOPS (including the calculation).

If you feel uncomfortable with RAID levels in general, you will find a good overview on Wikipedia (http://en.wikipedia.org/wiki/Standard_RAID_levels).

You should also have at least a basic understanding of the whole IOPS topic. There are many great resources out there which you should definitely take a look at if you want to get more in-depth knowledge about IOPS and how to calculate them; a small worked example follows the links below.

http://www.yellow-bricks.com/2009/12/23/iops/
http://en.wikipedia.org/wiki/IOPS
http://www.techrepublic.com/blog/datacenter/calculate-iops-in-a-storage-array/2182
http://www.pqr.com/images/stories/Downloads/whitepapers/deep%20impact%20eng.pdf
http://vmtoday.com/2009/12/storage-basics-part-i-intro/
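
To give a feeling for the calculation, here is a small worked example. The per-disk IOPS figure is an assumption (roughly what a 15k disk delivers); real values depend on the disks and the array:

  Raw backend IOPS: 8 disks x 150 IOPS = 1,200 IOPS
  RAID 5 (write penalty 4), 70% read / 30% write: 1,200 / (0.7 + 0.3 x 4) ≈ 630 frontend IOPS
  RAID 10 (write penalty 2), same workload: 1,200 / (0.7 + 0.3 x 2) ≈ 920 frontend IOPS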

If you haven't used the VMware Capacity Planner yet, it's definitely worth taking a look at it. There's also a free training about the Capacity Planner available (at least in the partner portal).


Apply VMware storage best practices

As mentioned in the previous topic, the impact of the storage infrastructure is often underestimated by VMware administrators. It is a huge topic and certainly nothing you can cover in just a few bullet points.

Below are the storage best practice statements I found in the official VMware documents:

-Configure and size storage resources for optimal I/O performance first, then for storage capacity.
-Aggregate application I/O requirements for the environment and size the storage accordingly.
-Base your storage choices on your I/O workload.
-Remember that pooling storage resources increases utilization and simplifies management, but can lead to contention.
-For the best storage performance, consider using VAAI-capable storage hardware.
-Ordinary VMFS is recommended for most virtual disk storage, but raw disks might be desirable in some cases.
-The alignment of file system partitions can impact performance. Using the vSphere Client to create VMFS partitions avoids this problem, since beginning with ESXi 5.0 it automatically aligns new partitions along a 1 MB boundary.
-Do not use the Fixed path policy for active/passive storage arrays to avoid LUN path thrashing (exception: active/passive storage arrays that support ALUA); see the example after this list.
-To optimize storage array performance, spread the I/O load over the available paths to the storage system.
-Deploy a VLAN just for the ESXi host's vmknic and the iSCSI/NFS server to minimize network interference from other sources.
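
As an example of the path policy point above, the current path selection policy of a device can be checked and changed from the ESXi shell. This is only a sketch: the device ID is a placeholder and the policy to use (here Round Robin) depends on your array and the vendor recommendation.

# esxcli storage nmp device list
# esxcli storage nmp device set --device naa.600xxxxxxxxxxxxxxxx --psp VMW_PSP_RR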


Understand use cases for Raw Device Mapping

General information:
-An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device.
-The RDM allows a virtual machine to directly access and use the storage device.


Two compatibility modes are available for RDMs:
Virtual compatibility mode allows an RDM to act exactly like a virtual disk file, including the use of snapshots.
Physical compatibility mode allows direct access of the SCSI device for those applications that need lower level control.
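
Both modes can also be created from the ESXi shell with vmkfstools. This is only a sketch; the device ID and the paths are placeholders for your environment:

Virtual compatibility mode RDM:
# vmkfstools -r /vmfs/devices/disks/naa.600xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk

Physical compatibility mode RDM:
# vmkfstools -z /vmfs/devices/disks/naa.600xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/vm1_rdmp.vmdk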


Use case:
-When SAN snapshot or other layered applications run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
-In any MSCS clustering scenario that spans physical hosts: virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as virtual disks on a shared VMFS.
-Makes it possible to use the NPIV technology that allows a single Fibre Channel HBA port to register with the Fibre Channel fabric using several worldwide port names (WWPNs).
-Makes it possible to run some SAN management agents (or similar SCSI target-based software) inside a virtual machine.

If you haven't done anything with RDMs yet, I strongly recommend reading the whole RDM chapter in the official documentation, which you can find here.


Configure vCenter Server storage filters

vCenter Server provides several storage filters to protect you from storage device corruption or performance degradation. These filters are enabled by default and can be disabled via advanced settings (see below).

The following four filters are available within vCenter 5.0 (the table in the official documentation describes them in detail); in short:

config.vpxd.filter.vmfsFilter: filters out LUNs that are already used by a VMFS datastore on any host managed by this vCenter Server

config.vpxd.filter.rdmFilter: filters out LUNs that are already referenced by an RDM on any host managed by this vCenter Server

config.vpxd.filter.SameHostAndTransportsFilter: filters out LUNs that are ineligible for use as a VMFS datastore extent because of host or storage type incompatibility

config.vpxd.filter.hostRescanFilter: automatically rescans and updates VMFS datastores after you perform datastore management operations

Configuration:

  1. Open the vSphere Client
  2. Go to Administration -> vCenter Server Settings -> Advanced Settings
  3. If the key config.vpxd.filter.xxxxxxx is not listed, add it and set its value to false to disable the corresponding filter


Understand and apply VMFS re-signaturing

General information:
-Each VMFS datastore has a unique UUID that is stored in the file system superblock.
-When a VMFS datastore is duplicated (replica, snapshot, etc.), the UUID is copied as well. The datastore is also treated as a copy after a LUN ID change, a SCSI device type change (SCSI-2 to SCSI-3), or when SPC-2 compliancy is enabled
-ESXi detects duplicated datastores and offers you two options: keep the existing signature or assign a new one

Use case:

Keep signature:
-In case of a disaster recovery you may mount a replicated datastore at a secondary site. It's important to keep the signature to allow a smooth failback to the primary site.

Note: You can only mount a duplicated datastore with its existing signature when the original datastore is offline.

Resignature a datastore:
-If you want to keep the datastore copy and/or bring it online while the original datastore is also online, resignaturing is required.

Consider the following points:
-Datastore resignaturing is irreversible.
-The LUN copy that contains the VMFS datastore that you resignature is no longer treated as a LUN copy.
-A spanned datastore can be resignatured only if all its extents are online.
-The resignaturing process is crash and fault tolerant. If the process is interrupted, you can resume it later.
-You can mount the new VMFS datastore without a risk of its UUID colliding with UUIDs of any other datastore, such as an ancestor or child in a hierarchy of LUN snapshots.

Configuration:

  1. Click the Configuration tab and click Storage in the Hardware panel.
  2. Click Add Storage.
  3. Select the Disk/LUN storage type and click Next.
  4. Select the LUN that has a datastore name displayed (should be indicated as copy) in the VMFS Label column and click Next.
  5. Under Mount Options, select either
    a) Keep the existing signature, or
    b) Assign a new signature
  6. Click Next and review the information on the last page before clicking Finish.
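
The same can also be done from the ESXi shell (a sketch; use the label shown by the list command):

List detected snapshot / replica volumes:
# esxcli storage vmfs snapshot list

Mount a copy and keep the existing signature:
# esxcli storage vmfs snapshot mount -l <datastore label>

Resignature a copy:
# esxcli storage vmfs snapshot resignature -l <datastore label>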


Understand and apply LUN masking using PSA-related commands

Coming soon…

Analyze I/O workloads to determine storage performance requirements

Coming soon…

Identify and tag SSD devices

Coming soon…

Administer hardware acceleration for VAAI

Meanwhile VAAI has become standard with vSphere 5.0 and modern storage systems. If you feel uncomfortable with it, I strongly recommend reading, in addition to my summary, at least the following two documents:
-Official Documentation – Storage Guide Page 173-181
-Technical Whitepaper – VMware vSphere Storage APIs – Array Integration (VAAI)

General information:
-Several tasks can be performed faster and more efficiently with hardware acceleration (VAAI)
-The storage hardware needs to support VAAI
-Since vSphere 5.0, NAS devices are also supported
-Block primitives: ATS / XCOPY / Write Same / UNMAP
-NAS primitives: Full File Clone / Fast File Clone / Extended Statistics / Reserve Space
-Built into the Pluggable Storage Architecture (PSA):

[Diagram: VAAI plugin within the Pluggable Storage Architecture]


Configuration:
For each VAAI block primitive a corresponding advanced option exists:

ATS
/VMFS3/HardwareAcceleratedLocking

XCOPY
/DataMover/HardwareAcceleratedMove

WRITE_SAME
/DataMover/HardwareAcceleratedInit

UNMAP
/VMFS3/EnableBlockDelete



Check the status of one of these options (XCOPY as an example):

# esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove

Enable (1) or disable (0) an option:

# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove


Alternatively you can also check / modify these options via the GUI.
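
Whether a specific device actually supports the primitives can be checked per device as well (a sketch; the naa ID is a placeholder):

# esxcli storage core device vaai status get -d naa.600xxxxxxxxxxxxxxxx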


Configure and administer profile-based storage

Coming soon…

Prepare storage for maintenance

General information:
Before you can unmount a datastore, make sure that:
-No virtual machines reside on the datastore.
-The datastore is not part of a datastore cluster.
-The datastore is not managed by Storage DRS.
-Storage I/O Control is disabled for this datastore.
-The datastore is not used for vSphere HA heartbeating.

Configuration:

  1. Select datastore from list
  2. Right-click the datastore and select Unmount
  3. Specify the hosts from which you want to unmount the datastore (for shared storage all hosts are selected by default) and click Next
  4. Review and click Finish

Note: Unmounted VMFS datastores appear as inactive / greyed out in the datastore list, while unmounted NFS datastores disappear from it.
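
The unmount can also be performed from the ESXi shell (a sketch; the label is a placeholder and the datastore must meet the prerequisites listed above):

# esxcli storage filesystem list
# esxcli storage filesystem unmount -l <datastore label>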

Upgrade VMware storage infrastructure

General information:
-To upgrade a VMFS2 datastore, you first need to upgrade it to VMFS3 (on an ESX/ESXi host <= 4.1)
-A locking mechanism ensures that the datastore isn't in use during the upgrade process
-After upgrading to VMFS5 no downgrade is possible
-Only ESXi 5.x hosts can access VMFS5 datastores
-Verify that at least 2 MB of free space and 1 free file descriptor are available before the upgrade
-An upgraded VMFS5 datastore differs from a newly formatted VMFS5 datastore (see screenshot):

[Screenshot: comparison of an upgraded VMFS5 datastore and a newly formatted VMFS5 datastore]

Configuration:

  1. Click the Configuration tab and click Storage.
  2. Select the VMFS3 datastore you want to upgrade.
  3. Click Upgrade to VMFS5.
  4. Click OK to start the upgrade.
  5. Perform a rescan on all hosts that are associated with the datastore.
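
As far as I remember, the upgrade can also be triggered from the ESXi shell with vmkfstools; treat this as a sketch and verify the syntax against the documentation first:

# vmkfstools -T /vmfs/volumes/<VMFS3 datastore>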

