New iSCSI server – but where’s my old VMFS volume? It’s missing!

My existing iSCSI setup wasn’t delivering the I/O I expected, so I upgraded on two fronts: a new eSATA array controller so I could RAID 10 across all 8 drives in my external drive enclosure (rather than the 4 the previous controller allowed), and a new physical Windows 2008 server to drive the I/O (rather than running it off my old Windows XP instance). This meant temporarily offloading all my existing VMFS data to another location so I could recreate the RAID. I did this by attaching a number of external USB HDDs to the old iSCSI target server and presenting them through to VMware as iSCSI targets. The VMs were then Storage vMotioned between iSCSI datastores until the external enclosure was free!

I installed StarWind (my iSCSI target software of choice) on the new server, hooked up the USB HDDs, and reconfigured it to present these HDDs to VMware as iSCSI targets.

I rescanned the iSCSI adapter, but to my surprise I couldn’t see the VMFS volume – only the LUN itself. Having worked with resignaturing in the past, I realised the volume must still be there, lurking in the background; it was merely being masked because VMware believed it was a snapshot, the LUN having previously been presented to the host under a different iSCSI IQN.
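For reference, the rescan itself can also be driven from the command line – a quick sketch, assuming vmhba33 is the software iSCSI adapter on your host (the name varies, so check yours first):

```shell
# List the storage adapters to find the software iSCSI HBA
esxcfg-scsidevs -a

# Rescan that adapter so ESX picks up newly presented LUNs
# (vmhba33 is an assumption - substitute your own adapter name)
esxcfg-rescan vmhba33
```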

So without further ado, I ssh’d over to the TSM and ran the following command to confirm my suspicion:

esxcfg-volume -l

The output produced the following:

VMFS3 UUID/label: 49d22e2e-996a0dea-b555-001f2960aed8/USB_VMFS_01
Can mount: Yes
Can resignature: Yes
Extent name: naa.60a98000503349394f3450667a744245:1 range: 0 – 397023 (MB)

Good news for me – the old VMFS volume, label intact, was still visible.

So, to re-add this back into the Storage view so that I could Storage vMotion the VMs back to my new 8-disk RAID setup, I ran the following command:

esxcfg-volume -M USB_VMFS_01

(You can specify -m if you only wish to mount it once; -M mounts the volume persistently, so it survives a reboot.)
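If you’d rather give the volume a brand-new signature instead of force-mounting it – for instance when the original copy is still presented to the same host – esxcfg-volume can do that too. A sketch of the two options, using my label from the output above:

```shell
# Option 1: force-mount the "snapshot" volume persistently,
# keeping its existing VMFS signature and label
esxcfg-volume -M USB_VMFS_01

# Option 2: write a new signature so the host treats it as a
# distinct volume (the datastore comes back with a "snap-" prefix)
esxcfg-volume -r USB_VMFS_01
```

In my case the original LUN was gone for good, so force-mounting with the old signature was the simpler choice.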

Tada! VMFS volumes all present and correct.

I’m now seeing a HUGE performance gain from using the 8 disks, and I’m going to try my hardest to push the limits of the 1Gb iSCSI connection before I consider adding a second NIC for Round Robin multipathing on both the VMware hosts and the iSCSI target server.
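For when that day comes, the VMware side of Round Robin on vSphere 4 is a one-liner per device – a sketch, assuming the naa device ID from the extent output above (list your own with esxcli nmp device list):

```shell
# Show current path selection policy for each device
esxcli nmp device list

# Switch a device's path selection policy to Round Robin
# (device ID is an assumption - use the naa ID of your own LUN)
esxcli nmp device setpolicy --device naa.60a98000503349394f3450667a744245 --psp VMW_PSP_RR
```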