Here are some quick thoughts, remarks and things I picked up today, in random order. The moment I'm home with a decent internet connection, I may update some of this info.
- From this morning's keynote: 3% of all generated energy goes into our data centers. This is why green data centers are important.
- As you can read elsewhere, Novell has acquired PlateSpin. Let's hope prices will drop.
- Site Recovery Manager is nice, but does not offer anything that cannot be done by hand or by script.
- File system alignment is becoming more and more important. Two of today's speakers claimed it is very important for file system performance. Just make sure you use the VI Client to create the file systems; it does the alignment for you.
- Some Linux kernels use a kHz CPU timer, which causes CPU overhead in the guest. The kernel boot option 'divider' modifies this behavior.
- A lot of people were interested in the talk discussing differences between FC SAN, iSCSI (SW & HW) and NFS. NFS, however, was not really covered in this talk. All in all, there were no big surprises here: FC is usually better than all the other alternatives, especially for large block sizes (less SCSI overhead), software iSCSI uses more CPU cycles than hardware iSCSI, etc. A whitepaper has been published with this info, but I don't have the link ready.
- iSCSI has been optimized for 8K block sizes, as this block size is encountered a lot. The result is clearly reflected in the stats.
- An experimental tool is available to analyze guest disk I/O statistics. It basically creates histograms of throughput, latency, average read/write distance, etc. The command line is 'vscsiStats'. I could not test it yet, as I don't have an ESX server in my hotel room. This alone makes it worth being here...
- In order to troubleshoot SAN performance issues, allocate a small LUN (e.g. 100MB), so that everything can be cached. This way, you avoid effects of physical disks, spindles, etc.
- In order to use the enhanced vmxnet driver in 3.5, you need to first remove the existing vNIC and add a new one. Then you can select the new enhanced interface with support for all the new features.
- When setting up network failover policies, it is important to take into account that, by default, the spanning tree protocol takes 30 seconds to open the uplink port on a physical switch. During this time, the virtual switch sees the link (to the physical switch) as up. 30 seconds is twice the default timeout for VMware HA, so rebooting a switch may cause a lot of havoc in this case.
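On the 'divider' boot option: on RHEL 5-style guests it is typically appended to the kernel line in grub.conf. A hedged example; the kernel version and paths here are placeholders, so copy the values from your own entry.

```
# Example grub.conf kernel line (version and paths are placeholders):
# divider=10 reduces a 1000 Hz timer to an effective 100 Hz tick,
# cutting timer-interrupt overhead in the guest
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 divider=10
```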
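On vscsiStats: as presented, the basic workflow looks roughly like the following. I haven't been able to verify it hands-on yet, so treat the exact flags and the world group ID as assumptions until I can try it on a real ESX host.

```
# List running VMs and their world group IDs (run on the ESX host)
vscsiStats -l
# Start collecting I/O stats for one VM (1234 is a placeholder ID)
vscsiStats -s -w 1234
# Print a histogram, e.g. latency; other types include ioLength and seekDistance
vscsiStats -p latency -w 1234
# Stop collection when finished
vscsiStats -x
```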
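On the spanning tree note: the usual mitigation is to enable PortFast on switch ports facing ESX hosts, so the uplink starts forwarding immediately instead of sitting in listening/learning for ~30 seconds. A sketch assuming Cisco IOS and a trunked port; the interface name is a placeholder, and other vendors have equivalent edge-port settings.

```
! Hypothetical Cisco IOS config for an ESX-facing port
interface GigabitEthernet0/1
 description ESX uplink
 spanning-tree portfast trunk
```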
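On the file system alignment note above: the check itself is just arithmetic. This sketch verifies that a partition's starting offset falls on a 64 KB boundary; the START value would come from something like 'fdisk -lu' on the host, and 128 sectors is the offset the VI Client typically creates.

```shell
# Sketch: is a partition start aligned to a 64 KB boundary?
# START is a placeholder for the starting sector reported by 'fdisk -lu';
# 128 sectors * 512 B/sector = 64 KB, so this example is aligned.
START=128
SECTOR_SIZE=512
if [ $(( START * SECTOR_SIZE % 65536 )) -eq 0 ]; then
    echo aligned
else
    echo misaligned
fi
```

A partition created with older tools often starts at sector 63, which fails this check and causes the extra I/Os the speakers warned about.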
As you might notice, I am particularly interested in everything that relates to performance. Furthermore, I have a lot of references to interesting KB articles, but I need to check them out first before posting any info.