For a current project at work, I've been stuck using local storage on a number of VMware ESXi 5.5 hosts since we do not have any shared storage available. I decided to take things into my own hands and do the next best thing: because we have an older HP ProLiant server available, I loaded OpenFiler v2.99 onto it. If you're wondering why I'm not using FreeNAS, the reason is simple: the HP server's RAID controller does not allow direct access to the drives, which is required to set up ZFS in FreeNAS.
The server has 8 drives in total. I set up a RAID1 array across 2 drives to house the OpenFiler installation and configuration, and a RAID5 array across the remaining 6 drives, which will hold the NFS share presented to the VMware hosts.
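For reference, a layout like this can also be scripted with HP's hpacucli array CLI instead of clicking through the Smart Array setup at boot; this is only a rough sketch, and the controller slot and drive bay IDs below are example values that will differ on your hardware:

    hpacucli ctrl all show config                                       # list controllers and unassigned drives
    hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1     # 2-drive RAID1 for the OpenFiler install
    hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=5     # remaining 6 drives as RAID5 for the NFS share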
The first challenge came up when attempting to install OpenFiler from a USB drive. Using Rufus to transfer the installation ISO to a flash drive didn't work as intended; apparently you also need to copy the ISO file itself over to the flash drive. I'm thankful a couple of people posted this info, as it probably would have cost me my sanity otherwise. For more information on how to properly create the bootable USB install drive, please see the SECASERVER blog. If you'd like a visual walkthrough of the process, someone was kind enough to post a YouTube video of that as well.
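The short version of the workaround: after Rufus writes the installer files to the stick, the ISO file itself also has to be copied onto the drive, presumably so the installer can locate its install media. If you're prepping the stick from a Linux box instead, that extra step would look something like this (the mount point and ISO filename are examples and will vary):

    cp openfileresa-2.99.1-x86_64-disc1.iso /mnt/usb/   # copy the ISO onto the already-bootable stick
    sync                                                # flush writes before pulling the drive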
Thankfully, after reading those instructions and watching the short video, the install went pretty smoothly. If you'd like instructions on how to set up OpenFiler NFS or iSCSI with a VMware host, Rob Bastiaansen posted a wonderful guide on the vmwarebits blog. If you'd like to use multiple NICs for redundancy and more bandwidth, you can set up link aggregation between the OpenFiler host and a switch; in my case, I'm using LACP on a Cisco switch. For more info, Conrad Jedynak has a blog post showing a sample configuration.
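For reference, the end result on the OpenFiler side is a plain NFS export, normally generated through the OpenFiler web GUI rather than edited by hand. A line like the one below in /etc/exports (path and subnet are examples) is roughly what ESXi expects, with no_root_squash being the important option since ESXi mounts NFS as root:

    /mnt/vg0/vmstore/vmware  10.0.0.0/24(rw,no_root_squash,sync,no_wdelay)

The datastore can then be mounted on an ESXi 5.5 host either from the vSphere Client or with esxcli; the hostname, share path, and datastore name below are examples:

    esxcli storage nfs add --host=openfiler01 --share=/mnt/vg0/vmstore/vmware --volume-name=openfiler-nfs
    esxcli storage nfs list

On the Cisco side, an LACP bundle for two OpenFiler NICs looks roughly like the sketch below, where "channel-group 1 mode active" is what enables LACP (interface names and VLAN are examples); the matching bond on the OpenFiler host is created through its web GUI using 802.3ad mode:

    interface range GigabitEthernet0/1 - 2
     switchport mode access
     switchport access vlan 10
     channel-group 1 mode active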
Thursday, June 30, 2016
Thursday, June 2, 2016
Solarwinds indexes with fragmentation over 90%
If your Solarwinds NPM deployment database shows indexes with fragmentation over 90% during DB maintenance, there are a few solutions listed on the Thwack Community Forums.