VM-flat.vmdk file – check… but where’s the .vmdk file?

So after spending many hours migrating all my VMs back to their uber-performing datastores, I went to power on my secondary DC, only to find it wouldn’t start up.

Something to do with a missing file.

“The system cannot find the file specified.
Cannot open the disk ‘DC02.vmdk’ or one of the snapshot disks it depends on.
VMware ESX cannot find the virtual disk ‘DC02.vmdk’. Verify the path is valid and try again.”

I immediately checked the configuration settings in vCenter and all appeared correct. The datastore browser confirmed that it could see the 10GB vmdk file – so what could it be?

I never trust a GUI, so I ssh’d over to the TSM and did a quick directory listing, only to find that whilst the -flat.vmdk file was there, the .vmdk file wasn’t! Somewhere in the migration back, the VM had lost the descriptor file that defines its disk geometry, controller type and provisioned format (thin/thick).

Knowing I had the -flat file was reassuring; had the shoe been on the other foot and all I was left with was the .vmdk descriptor, I would have been a lot more concerned.

The first step to resolution was to create a new virtual disk identically sized to the -flat file I had been left with. In turn, this would create a new .vmdk descriptor file that I could borrow.

1) Determine the existing -flat.vmdk file size

ls -l *-flat.vmdk
-rw-------    1 root     root         4841537536 Feb 24 23:38 DC02-flat.vmdk

2) Determine the controller type associated with this disk

grep -i virtualdev *.vmx

scsi0.virtualDev = "lsilogic"

In this instance the VM used the lsilogic controller.

3) Armed with this information, I could now create a new vmdk

vmkfstools -c 4841537536 -a lsilogic -d thin temp.vmdk
(the -d thin parameter provisions the new disk as thin, as we don’t really want its -flat file anyway)

4) The result is a temp-flat.vmdk and a temp.vmdk file.

5) Rename the temp.vmdk file to match the VM name – in this case DC02

mv temp.vmdk DC02.vmdk

6) Edit DC02.vmdk using vi and update the extent description so that the filename matches the server name – temp-flat.vmdk becomes DC02-flat.vmdk:

# Extent description
RW 20971520 VMFS "DC02-flat.vmdk"

7) If the original -flat.vmdk was thinly provisioned, you do not need to modify any additional parameters in the file; however, if it was thick, you must remove the following line:

ddb.thinProvisioned = "1"

8) Delete the temp-flat.vmdk created in step 3 and you should be good to go!
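Steps 5–7 lend themselves to a quick script. The sketch below replays the descriptor edits against a sample temp.vmdk – the descriptor contents here are purely illustrative (assuming the general layout vmkfstools generates); on a real host you would obviously start from the temp.vmdk created in step 3:

```shell
#!/bin/sh
# Illustrative sketch of steps 5-7. The descriptor below is a sample
# stand-in for the temp.vmdk created by vmkfstools in step 3.
cat > temp.vmdk <<'EOF'
# Disk DescriptorFile
version=1
createType="vmfs"

# Extent description
RW 20971520 VMFS "temp-flat.vmdk"

# The Disk Data Base
#DDB
ddb.adapterType = "lsilogic"
ddb.thinProvisioned = "1"
EOF

# Step 5: rename the descriptor to match the VM name
mv temp.vmdk DC02.vmdk

# Step 6: point the extent at the orphaned -flat file
# (avoiding sed -i, which the BusyBox sed on the console may lack)
sed 's/temp-flat\.vmdk/DC02-flat.vmdk/' DC02.vmdk > DC02.vmdk.tmp &&
    mv DC02.vmdk.tmp DC02.vmdk

# Step 7: only if the original disk was thick, drop the thin flag
sed '/ddb\.thinProvisioned/d' DC02.vmdk > DC02.vmdk.tmp &&
    mv DC02.vmdk.tmp DC02.vmdk

cat DC02.vmdk
```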

New iSCSI server – but where’s my old VMFS volume? It’s missing!

My existing iSCSI setup wasn’t delivering the I/O I expected, so I set about upgrading my eSATA array controller so I could RAID 10 across the 8 drives in my external drive enclosure (rather than the 4 the previous controller allowed), and in addition built a new physical Windows 2008 server to drive the I/O (rather than running it off my old Windows XP instance). To do this I had to temporarily offload all my existing VMFS data to another location so I could recreate the RAID. This was done using a number of external USB HDDs attached to the old iSCSI target server and passed through to VMware as iSCSI targets. The VMs were then Storage vMotioned between iSCSI datastores until the external enclosure was free!

I installed Starwind (my iSCSI target software of choice) on the new server and hooked up the USB HDDs. I then proceeded to reconfigure it to present these iSCSI HDDs to VMware.

I rescanned the iSCSI adapter but, to my surprise, couldn’t see the VMFS volume – only the LUN itself. Having worked with resignaturing in the past, I realised the volume must still be lurking in the background; VMware was merely masking it because it believed it was a snapshot, having previously been presented to the host under a different iSCSI IQN.

So without further ado, I ssh’d over to the TSM and ran the following command to confirm my thoughts:

esxcfg-volume -l

The output produced the following:

VMFS3 UUID/label: 49d22e2e-996a0dea-b555-001f2960aed8/USB_VMFS_01
Can mount: Yes
Can resignature: Yes
Extent name: naa.60a98000503349394f3450667a744245:1 range: 0 – 397023 (MB)

Good news for me – the old named VMFS volume was still visible.  

So, to re-add this back into the Storage view so that I could Storage vMotion the VMs back to my new 8-disk RAID setup, I ran the following command:

esxcfg-volume -M USB_VMFS_01

(you can specify -m if you only wish to mount it once; -M mounts the volume persistently)
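If several volumes end up in this state, the listing can be parsed and each label mounted in a loop. The sketch below only demonstrates the text handling – it runs against a captured copy of the esxcfg-volume -l output above rather than a live host:

```shell
#!/bin/sh
# Sketch: extract volume labels from captured `esxcfg-volume -l` output
# and build the persistent-mount commands. On a live host you would pipe
# the command itself rather than this saved copy.
cat > volumes.txt <<'EOF'
VMFS3 UUID/label: 49d22e2e-996a0dea-b555-001f2960aed8/USB_VMFS_01
Can mount: Yes
Can resignature: Yes
Extent name: naa.60a98000503349394f3450667a744245:1 range: 0 - 397023 (MB)
EOF

# The label is the part after the '/' on the UUID/label line
labels=$(awk -F/ '/UUID\/label:/ {print $NF}' volumes.txt)

for label in $labels; do
    # On a live host, run the command instead of echoing it:
    # esxcfg-volume -M "$label"
    echo "esxcfg-volume -M $label"
done
```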

Tada! VMFS volumes all present and correct.

I’m now seeing a HUGE performance gain from using the 8 disks, and I’m going to try my hardest to push the limits of the 1Gb iSCSI connection before I consider adding a second NIC for Round Robin on both the VMware hosts and the iSCSI target server.

vSphere 4.1 U1 – available!

Today I awoke to discover that vSphere 4.1 U1 is now available for download. The details of the announcement were pretty straightforward:

VMware ESX and ESXi
* Support for up to 160 logical processors
* Inclusion of additional drivers
* Enablement of Intel Trusted Execution Technology (ESXi only)
* Additional guest operating system support
* Bug and security fixes

VMware vCenter
* Additional guest operating system customization support
* Additional vCenter Server database support
* Bug and security fixes

VMware vCenter Update Manager
* The VMware vCenter Update Manager Utility to help users reconfigure the setup of Update Manager.
* Bug and security fixes.

VMware vCenter Orchestrator
* Bug Fixes


I’ll be taking a deeper look at the associated fixes and will get this into the lab asap to determine whether there are any noticeable updates worth reporting back!