Enable Round Robin Multipathing by Default




I will be enabling Round Robin Native Multipathing on all of the hosts. This will allow new datastores to be presented without additional configuration. Here is how it is done.


Log in to the host via PuTTY.


List out the Storage Array Type Plugins (SATP)
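The listing step above can be sketched as follows, using the ESXi 4.x command namespace that matches the `setdefaultpsp` command used later in this post (on ESXi 5.0 and later the equivalent is `esxcli storage nmp satp list`):

```shell
# List the installed Storage Array Type Plugins and their current default PSPs.
# ESXi 4.x namespace; on 5.0+ use: esxcli storage nmp satp list
esxcli nmp satp list
```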




Look at the existing datastores and note the Storage Array Type.
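To see which SATP has claimed each existing device, something like the following should work (same ESXi 4.x namespace caveat as above; on 5.0+ it is `esxcli storage nmp device list`):

```shell
# Show each device with its Storage Array Type and current path selection policy.
esxcli nmp device list
```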






esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_DEFAULT_AA
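On ESXi 5.0 and later the NMP commands moved under the `storage` namespace; assuming that newer layout, the equivalent of the command above would be:

```shell
# ESXi 5.x+ equivalent: set Round Robin as the default PSP for the active/active SATP.
esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_DEFAULT_AA
```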


To verify that the default PSP is now Round Robin:


Type the command below and you will see that VMW_SATP_DEFAULT_AA now defaults to Round Robin.
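Assuming the same ESXi 4.x namespace as the rest of this post, the verification looks like this:

```shell
# Confirm that VMW_SATP_DEFAULT_AA now shows VMW_PSP_RR as its default PSP.
esxcli nmp satp list | grep VMW_SATP_DEFAULT_AA
```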





By Jeff Johnson

Failed vMotion

So I had something to deal with today that I hadn’t dealt with in a while: a failed vMotion.  The VM would fail every time with the following error:

Migration to host  failed with error Already disconnected (195887150)

This happened on VMs coming from this particular host, but vMotioning to the host was successful.  Strange.

Did all the regular troubleshooting, including:

1) Ran vmkping to and from the vmkernel port used for vMotion

2) Verified DNS on each side

3) Restarted the management agents on the host in question

4) Verified time on the host

5) Turned off Enable Migration in the Advanced Settings of the host and re-enabled it

6) Unchecked vMotion on the vmkernel port and re-enabled it
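For step 1, the vmkping check looks roughly like this, run from each host's shell (the peer IP here is a hypothetical vMotion address; substitute your own, and note that newer ESXi versions also accept `-I vmkX` to pin a specific vmkernel interface):

```shell
# Basic reachability check over the vMotion network.
# 10.0.50.12 is a placeholder for the remote host's vMotion vmkernel IP.
vmkping 10.0.50.12

# If jumbo frames are configured, also test a large packet with don't-fragment set:
vmkping -s 8972 -d 10.0.50.12
```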

Most of this is standard procedure from the great VMware KB article:  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003734

Bottom line: this was a networking issue; the question was where.

Since I read a post suggesting a check for a duplicate IP, I changed the IP of the vmkernel port and, what do you know, it started working.  I then changed it back to the original and it worked fine.  I guess the port group just needed to be reinitialized, because if it had been a duplicate IP it wouldn’t have worked either way.

By Jeff Johnson

Resource Pools vs. Folders

I started a new position in September, and the person before me had organized everything by resource pools instead of folders.  I decided to do a write-up explaining why this was a problem waiting to happen.

I want to revisit the way that we have things organized in our VMware environment, specifically with regard to folders in the VMs and Templates view vs. resource pools in the Hosts and Clusters view.

Let’s start with an excerpt from a blog post below.

Many organizations have the bad habit of using resource pools to create a folder structure in the Hosts and Clusters view of vCenter. Virtual machines are placed inside a resource pool to show some kind of relation or sorting order, such as operating system or type of application. This is not the reason why VMware invented resource pools. Resource pools are meant to prioritize virtual machine workloads and to guarantee and/or limit the amount of resources available to a group of virtual machines.


During design workshops I always try to convince the customer why resource pools should not be used to create a folder structure. The main objection I have to this is the sibling share level of resource pools and virtual machines.

Pasted from <http://frankdenneman.nl/2010/09/resource-pools-and-simultaneous-vmotions/>

Presently, we have resource pools for Dev, QA, Stage, and Production.  The theory was to give each resource pool shares according to its priority.

The issue is that resource pools do not take into account the number of VMs within them.  The result is that even though a particular resource pool may have a higher share level, by the time the pool is subdivided, an individual VM can end up with fewer shares than a VM that resides in a resource pool with a lower number of shares.

Let’s look at a simple example first, then we will apply it to our environment.

Consider a “Test” Resource Pool with 1000 shares of CPU and 4 VMs, vs. a “Production” Resource Pool with 4000 shares of CPU and 50 VMs.

“Test” 1000 shares, 4 VMs => 250 units per VM (small pie, a few big slices):

“Production” 4000 shares, 50 VMs => 80 units per VM (bigger pie, many small slices):

Pasted from <http://www.yellow-bricks.com/2010/02/22/the-resource-pool-priority-pie-paradox/>
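The per-VM numbers in that example fall straight out of dividing each pool's shares evenly among its VMs, which can be checked at any shell prompt:

```shell
# Per-VM share count: pool shares divided evenly among the sibling VMs in the pool.
test_shares=1000;  test_vms=4
prod_shares=4000;  prod_vms=50

echo "Test: $(( test_shares / test_vms )) shares per VM"          # prints 250
echo "Production: $(( prod_shares / prod_vms )) shares per VM"    # prints 80
```

So despite Production holding four times the shares, each Production VM ends up with less than a third of what a Test VM gets.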

So as you can see above, the Production resource pool’s share allocation doesn’t take into account the number of VMs below it.

Ultimately what matters is the number of shares that the VM has.

A further note about shares and how they come into play.


Shares are only supposed to come into play when there is contention.  There is always competition for resources in VMware; we are overallocated on memory on a few hosts.  This is not a bad thing, as what really needs to be monitored is active memory on the host.  However, if the shares for the VMs aren’t allocated properly, this can be an issue: you end up with non-production VMs having a higher priority than more critical VMs, as you will see later.

Whenever a VM gets migrated to a new host, its entitlement is recalculated against the other VMs on that host.  Ultimately, resources are allocated based on what the host it resides on has available.

Let’s take a look at how this applies to our environment.

See the current Resource Pool layout from the root of the cluster.

Now let’s take a look at the child resource pools underneath the Production pool.  Notice that each resource pool has the same number of shares, which doesn’t take into account the number of VMs underneath each one.  So the more VMs in a particular pool, the fewer shares per VM.

Now if we drill down into SQL Servers, we see how many servers are in this pool, and it is treated no differently than the FTP Servers resource pool, which 1) doesn’t need as many resources and 2) only has 3 active servers.

Notice the Worst Case Allocation column in SQL Servers.


Now look at the Worst Case Allocation column in FTP Servers below.   The last FTP server has nearly 5 times the shares of most SQL Servers.

Take a look at the Application Servers’ Worst Case Allocation.  Because there are so many servers, they are further subdivided, with the result that a resource pool with only a server or two gets more resources per VM.

Let’s look at some pie charts to further visualize how the pie gets divided and subdivided.

See the below division of shares at the root of the cluster.

Notice below how every pool is treated the same regardless of the number of VMs within it, even though not all categories should necessarily get the same priority.

The small slice for SQL Servers above gets split up further below.

So if we calculate it out:

Production: 44%

SQL Servers: 7%

Average SQL Server: 5%

The end result: 0.44 x 0.07 x 0.05 ≈ 0.0015, or about 0.15%.

Less than a quarter percent.  I don’t think that is what was intended.
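The multiplication can be sanity-checked at a shell prompt (awk handles the floating point):

```shell
# 44% of the cluster x 7% of Production x 5% of the SQL Servers pool,
# expressed as a percentage of total cluster shares.
awk 'BEGIN { printf "%.2f%%\n", 0.44 * 0.07 * 0.05 * 100 }'   # prints 0.15%
```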

And these are the servers that carry the heaviest load.

Folders: Much Simpler

Folders are much less complicated, as they are simply a structure and nothing else.   You can have as many clusters as you want, and it’s transparent in the VMs and Templates view where folders reside.

There are none of the potential adverse effects on resource allocation that come with resource pools.

You can divide and subdivide to your heart’s content, organizing things the way you see fit and for the right reasons.

You can update your VMs for VMware Tools and virtual hardware upgrades with Update Manager.  When VMs are organized in resource pools, it becomes next to impossible to update them with Update Manager from a single list of hundreds of VMs.

Should we use resource pools at all?

This is a subject that is up for debate.  Resource pools without a doubt complicate your resource configuration.  The biggest issue is that they need to be constantly revisited: what makes sense at the beginning very well might not make sense 6 months later.

Another thing to be careful of: if VMs get placed into the wrong pool and end up as siblings of another child pool, that further subdivides both the pools at that level and all the VMs under each pool.  This can happen when manually vMotioning VMs and picking the wrong destination pool.

So what if someone says to you, “Well, that only matters when there is contention.”

Can any of us really predict when there will be contention?  The answer is no.  If you happen to have a host issue (or two), depending on the size of your environment, all of a sudden your hosts are running at much higher utilization than you thought.  How about a scenario where someone wants you to build an additional 20 VMs in your environment in the next month?  (Because VMs are free, right?   🙂 )

I think we get the picture here.

So if you have resource pools, take a look at your environment and at the Worst Case Allocation column.  You might be in for a surprise compared to what you originally intended.

By Jeff Johnson
