Lately I’ve been dealing with a problem caused by thin-provisioned disks in my vSphere environment. Although VMware offers the feature, I prefer to leave thin provisioning to the back end, to the storage administrators. Usage can grow beyond your control in a short time, and you may not be able to obtain additional disk resources quickly enough. It is simply about staying on the safe side: “I do not use thin provisioning for VMware virtual disks.”
Unfortunately, before my time one datastore had been over-provisioned (over 200%!) and left unattended. Only a short while after I took over, clients decided to use the capacity that had been promised to them. We received a call about a VM that had become unresponsive, and as we dug in we realized the severity of the situation.
In short, we received new LUNs and moved the VMs to new datastores. I decided to convert all the thin-provisioned disks to thick. Officially this is done by right-clicking the thin-provisioned vmdk file in the datastore browser and choosing “Inflate”:
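The same inflate operation can also be run from the ESXi command line with vmkfstools. A minimal sketch, assuming SSH access to the host and that the disk is not in use (the path below is a placeholder):

```shell
# Inflate a thin-provisioned vmdk in place.
# Like the GUI "Inflate" action, this produces an eager-zeroed thick disk.
vmkfstools --inflatedisk /vmfs/volumes/mydatastore/myvm/myvm.vmdk
```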
However, this triggers a conversion to an eager-zeroed thick-provisioned disk, which takes more time and creates unnecessary I/O on the storage side. To convert the disks to lazy-zeroed thick instead, I initiated a Storage vMotion with the appropriate virtual disk format selected:
After a successful migration, the VM’s virtual disks are lazy-zeroed thick provisioned. The GUI may still show “Used Space 0.00 B” for the VM; a refresh on the datastore’s Summary / Capacity Usage page should correct the glitch.
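To double-check the result without relying on the GUI, you can read the disk format directly in PowerCLI. A read-only sketch, assuming an existing vCenter connection and using “thinvm” as a placeholder VM name:

```powershell
# Connect-VIServer must have been run already.
# StorageFormat will report Thin, Thick (lazy zeroed) or EagerZeroedThick.
Get-VM -Name thinvm | Get-HardDisk |
    Select-Object Name, StorageFormat, CapacityGB
```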
To automate the process you can use the following PowerCLI command; the Thick value of -DiskStorageFormat produces a lazy-zeroed thick disk:
Move-VM -VM thinvm -Datastore differentdatastore -DiskStorageFormat Thick
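If many VMs are affected, the same command can be wrapped in a loop that only touches VMs which still have thin disks. A sketch under the assumption of an existing vCenter connection; “olddatastore” and “differentdatastore” are placeholder names:

```powershell
# Migrate every VM on the old datastore that still has at least one thin disk.
$target = Get-Datastore -Name differentdatastore
Get-VM -Datastore (Get-Datastore -Name olddatastore) |
    Where-Object { $_ | Get-HardDisk | Where-Object { $_.StorageFormat -eq 'Thin' } } |
    ForEach-Object {
        Move-VM -VM $_ -Datastore $target -DiskStorageFormat Thick -Confirm:$false
    }
```

Note that a Storage vMotion converts all of the VM’s disks to the chosen format, so run this during a quiet window to limit the extra I/O on the array.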