I have a server with ESXi 5 and iSCSI-attached network storage. The storage server runs FreeNAS 8.0.4 with 4x 1TB SATA II disks in RAID-Z. The two machines are connected to each other with Gigabit Ethernet, isolated from everything else. The SAN box itself is a 1U Supermicro server with an Intel Pentium D at 3 GHz and 2GB of memory. The disks are connected to an integrated controller (Intel something?).

The RAID-Z volume is divided into three parts: two zvols, shared via iSCSI, and one directly on top of ZFS, shared via NFS and the like.

I SSH'd into the FreeNAS box and did some testing on the disks. I used dd to test the third part (straight on top of ZFS): I copied a 4GB block (2x the amount of RAM) from /dev/zero to the disk, and the speed was 80MB/s.

One of the iSCSI-shared zvols is a datastore for the ESXi host. I ran a similar test there; since dd did not report a speed, I divided the amount of data transferred by the elapsed time reported by time. The result was about half of the speed measured on the FreeNAS host itself!

Then I tested the I/O from a VM running on the same ESXi host. The VM was a light CentOS 6.0 machine which was not really doing anything else at that time; there were no other VMs running on the server, and the other two "parts" of the disk array were not in use. A similar dd test gave me a result of about 15-20 MB/s. That is again about half of the result one level lower!

Of course there is some overhead in raid-z -> zfs -> zvolume -> iSCSI -> VMFS -> VM, but I don't expect it to be that big. I believe there must be something wrong in my system. I have heard about bad performance of FreeNAS's iSCSI, is that it? I have not managed to get any other "big" SAN OS to run on the box (NexentaStor, Openfiler). Can you see any obvious problems with my setup?

To speed this up you're going to need more RAM. I'd start with some incremental improvements.

1) ZFS needs much more RAM than you have to make use of the ARC cache. If you can increase it to at least 8GB or more, you should see quite an improvement.

2) Next, I would add a ZIL log disk. The recommendation is to use two ZIL disks for redundancy. This will speed up writes tremendously.

3) Add an L2ARC disk. This can consist of a good-sized SSD. Technically speaking, an L2ARC is not needed; however, it's usually cheaper to add a large amount of fast SSD storage than more primary RAM. But start with as much RAM as you can fit/afford first. (Rough commands for adding the log and cache devices are sketched at the end of this answer.)

There are a number of websites around that claim to help with ZFS tuning in general, and these parameters/variables may be set through the GUI. Worth looking into/trying.

Also, consult the FreeNAS forums. You may receive better support there than you will here.

If you happen to have multiple NIC interfaces in your Supermicro server, you can channel-bond them to give you almost double the network throughput and some redundancy.
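As for the sketch promised above: attaching the log and cache devices to an existing pool is straightforward. These are only example invocations; the pool name "tank" and the adaX device names are placeholders for whatever your system actually uses:

    # mirrored ZIL/SLOG across two SSDs, per the two-disk redundancy recommendation above
    zpool add tank log mirror ada4 ada5

    # a single SSD as L2ARC read cache
    zpool add tank cache ada6

Double-check the real device names first (camcontrol devlist, or the FreeNAS GUI) before running anything like this against a live pool.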
What you are probably seeing is not a translation overhead but a performance hit due to a different access pattern. Sequential writes to a ZFS volume simply create a nearly sequential data stream to be written to your underlying physical disks. Sequential writes to a VMFS datastore on top of a ZFS volume create a data stream which is "pierced" by metadata updates of the VMFS filesystem structure and by frequent sync / cache flush requests for this very metadata. Sequential writes to a virtual disk from within a guest add yet more "piercing" of your sequential stream, due to the guest's own file system metadata.

The cure usually prescribed in these situations is enabling a write cache which ignores cache flush requests. It would alleviate the random-write and sync issues and improve the performance you see in your VM guests. Keep in mind, however, that your data integrity would be at risk if the cache were not capable of persisting across power outages / sudden reboots.

You could easily test whether you are hitting your disks' limits by issuing something like iostat -xd 5 on your FreeNAS box and looking at the queue sizes and utilization statistics of your underlying physical devices. Running esxtop in disk device mode should also help you get a clue about what is going on, by showing disk utilization statistics from the ESX side.

I currently use FreeNAS 8 with two RAID5 SATA arrays attached to the server. The server has 8GB of RAM and two single-core Intel Xeon processors. My performance has been substantially different from what others have experienced. I am not using MPIO or any load balancing on the NICs; there is just a single Intel GigE 10/100/1000 server NIC. Both arrays have five 2.0TB drives, equating to roughly 7.5TB of space in RAID5.
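To see where my numbers diverge, I'll probably start with the checks suggested above, along these lines:

    # on the FreeNAS box: per-disk utilization and queue depths, refreshed every 5 seconds
    iostat -xd 5

    # on the ESXi host: start esxtop, then press u for the disk-device view
    esxtop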