Friday, April 30, 2010

iPad as mobile thin client




I had been using Wyse's PocketCloud app on the iPhone for a while, but found the iPhone's screen real estate too limiting for everyday use - with the iPad that limitation is gone.
Above are screenshots of PocketCloud connected via its VMware View configuration to a Windows 7 linked clone running on a two-node View 4 vSphere cluster.
I am using a VPN connection over wifi and administering the cluster via the VI Client!

So far I have not encountered any operations that I cannot perform on this platform - PocketCloud provides a robust UI (including a popup draggable mouse pointer for fine-tuned clicking, right-clicking, etc.).

With multitasking rumored for the upcoming v4 of the OS, we can hopefully manage multiple connections and be able to leave a session and resume it without having to reconnect, as is now necessary in v3.x.

Monday, April 12, 2010

Xen + ZFS = The Free VI Experiment

Having wrapped up last year's physical-to-virtual project, I decided to take stock of the progress made in the free/open-source virtualization and storage areas - how would the feature sets of commercial hypervisors and storage stack up against their low-cost/free counterparts?

We are a VMware + NetApp shop for the most part in our production VI - we like to be able to call support when we have an issue in production.

Thanks to the dozens of servers we virtualized, we have plenty of hardware to use for this in the lab.
For this experiment I decided to start with Sun's ZFS, which recently gained a deduplication feature. I downloaded and installed build 129 of Solaris and created a 1 TB ZFS pool, then created filesystems and shared them out via NFS for the XenServers to use.
I turned on dedup and copied 386 GB of VMs from a NetApp volume to the ZFS filesystem.
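For reference, the ZFS side of the setup boils down to a handful of commands. This is only a sketch - the disk device name is a placeholder, and the pool/filesystem names follow the listing below:

```shell
# Create a pool named data1 (disk device name is a placeholder)
zpool create data1 c1t1d0

# Filesystem for the VM images, exported over NFS
zfs create data1/vms
zfs set sharenfs=on data1/vms

# Enable deduplication (available in recent OpenSolaris builds)
zfs set dedup=on data1/vms

# After copying data in, check the dedup ratio
zpool list data1
```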
Here is how the dedup savings stacked up:
ZFS:
fcocquyt@lab-zfs-01:~ 2:05pm 1 > zpool list
NAME    SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
data1   928G   103G  825G  11%  4.01x  ONLINE  -
fcocquyt@lab-zfs-01:~ 2:06pm 2 > zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
data1/vms   386G   806G   386G  /data1/vms
ZFS saved 386 - 103 = 283 GB (283/386 = 73%)
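The savings arithmetic falls straight out of the zpool output above and can be checked with shell arithmetic:

```shell
logical=386   # GB of VM data copied into the pool (zfs list USED)
alloc=103     # GB actually allocated on disk (zpool list ALLOC)
saved=$((logical - alloc))
pct=$((saved * 100 / logical))
echo "saved ${saved}GB (${pct}%)"
```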


Netapp (ONTap 7.3.1.1):
netapp-01> df -sh
Filesystem     used   saved  %saved
/vol/vm2/     167GB   229GB     58%


OK, so ZFS is able to get 73 - 58 = 15% better dedup savings - nice for a free solution!

Xen:
Then I loaded XenServer 5.5 on three old Sun Fire X2100s (AMD CPU), each with 2 GB of RAM.
I installed XenCenter, created a pool, used the update tool in XenCenter to update them all to 5.5 U1, and configured the NFS datastore shared from the ZFS system.
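Attaching the ZFS NFS export as a shared storage repository can also be done from the XenServer CLI rather than XenCenter. A sketch - the SR name, server, and path are assumptions matching the ZFS layout above:

```shell
# Create a shared NFS SR on the pool (name-label, server,
# and serverpath values are illustrative)
xe sr-create name-label="zfs-nfs-sr" shared=true type=nfs \
  content-type=user \
  device-config:server=lab-zfs-01 \
  device-config:serverpath=/data1/vms
```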
I uploaded a CentOS 5.3 ISO and a Windows 7 ISO and used XenCenter to create a new VM of each flavor.
VM Migration: once I installed XenTools I was able to live migrate (VMware calls this vMotion) the new VMs around the pool's nodes.

What's missing:

In terms of the core feature sets there really is not much missing:
NetApp vs ZFS: both have snapshots, dedup, and remote replication
VMware vs XenServer: both have snapshots, vMotion, P2V (I have not tried Xen's yet), and centralized management
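Xen's counterpart to vMotion is exposed through the xe CLI as well as XenCenter. A sketch of the live migration performed above - the VM and host names are placeholders:

```shell
# Live-migrate a running VM to another host in the pool
# (vm and host values are placeholders)
xe vm-migrate vm=centos53-vm host=xenserver-02 live=true
```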

Conclusion: I was very impressed that in less than a day I could set up a free virtual infrastructure in the lab as a proof of concept to compare feature sets with the commercial VI solutions.

Pros and Cons of Blade Solutions for Virtual Infrastructure

In comparing blade versus traditional server solutions for virtual infrastructure (VI), there are pros and cons to both. Itemizing the advantages and disadvantages of each solution will help an IT department justify and defend the decision to go one direction or the other. Of course, blades may be better suited to one customer and servers to another; there are many factors to consider in making the ultimate decision. Below are a few of the more common aspects to weigh when deciding between blades and servers.

Definitions:

Blade Server solutions:
In a blade chassis, a set of blade servers conventionally shares power and cooling.
http://en.wikipedia.org/wiki/Blade_server
More recently there is a trend toward also sharing networking and storage I/O through converged network adapters (e.g. Cisco's Unified Computing System (UCS)).

Server solutions:
In contrast to the blade solution, each server is autonomous with respect to its components - power, cooling, etc. - nothing is shared.

Pros and Cons:

Blade Solution:
Pros:
  • Efficiency: via shared power and cooling blade servers offer better efficiency in these areas
  • Density: blade servers offer higher density per rack U for CPU resources (although this can be a con if your datacenter can not handle the power and cooling density)

Cons:
  • Cost: Requires the additional expense of a chassis to house the blades
  • Lock-in: the chassis represents an added level of vendor lock-in due to the chassis investment (which may cost as much as $30,000, the price of several individual servers)
  • More lock-in: related to lock-in are reduced negotiating power on pricing and the loss of business agility to go best-of-breed as easily as when deploying standalone servers
  • All eggs in one basket: if the blade chassis has an issue or needs maintenance, all VMs hosted on its blades will be down at once
Standalone Server Solution:
Pros:
  • Cost: no chassis to pay for - can take advantage of the latest competitive pricing
  • Business Agility: allows choice of best-of-breed technology (without the lock-in of the blade chassis)

Cons:
  • Efficiency: takes more power and cooling per rack U than the shared blade chassis
  • Density: offers less CPU resource density per rack U