First, a word of warning on this post: if you are reading it and it isn't January of 2017, I suspect things may have changed significantly since then.
So I did it: I built a SQL Server cluster on Linux. The process is documented on BOL. I'm not going to walk you through it step by step (I'll probably do that in a later post); I just wanted to mention some things I ran into during the build. First, I did this using VMware Fusion on my Mac, but any virtualization platform that supports virtual networks should work. Second, even though BOL says you need Red Hat Enterprise Linux (and you do if you are doing this in prod and require support), I was able to do all of this on CentOS, which is the free-as-in-beer version of RHEL.
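For context, the per-node install is just the standard package install from the docs. Here's a rough sketch of what that looked like on my CentOS nodes; the repo URL below is a placeholder, so use whatever the current BOL page points you at:

```bash
# Add Microsoft's SQL Server yum repo (URL is a placeholder; grab the real one from BOL)
sudo curl -o /etc/yum.repos.d/mssql-server.repo \
    https://packages.microsoft.com/config/rhel/7/mssql-server.repo

# Install the engine and run initial setup (accept the EULA, set the sa password)
sudo yum install -y mssql-server
sudo /opt/mssql/bin/mssql-conf setup

# Confirm the service is up
systemctl status mssql-server
```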
In my scenario, I built three VMs: one to serve as an NFS server, and the other two to be my SQL Servers. Currently, there is no cluster-aware version of the install; you do the standard standalone SQL Server on Linux installation, then point SQL Server at the NFS mount you created, which serves as your shared storage. I had an initial permissions problem writing my data files there: I did a bad thing on the NFS server and opened the directory up to the world (777), and was then able to copy files there. I'll follow up on that.
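If you want to see roughly what the shared storage piece looks like, here's a sketch, with hypothetical hostnames, paths, and subnet. The chown at the end is probably the better fix for my permissions problem than 777, assuming the mssql user's UID lines up across the boxes (NFS maps by UID, which may be what bit me in the first place):

```bash
# On the NFS server: export a directory to the cluster subnet.
# no_root_squash is handy in a lab; tighten all of this for anything real.
sudo mkdir -p /var/sqldata
echo '/var/sqldata 192.168.2.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra

# Less-bad alternative to 777: give the mssql user ownership of the export
sudo chown mssql:mssql /var/sqldata

# On each SQL Server node: mount the export where SQL Server expects its data
sudo mount -t nfs nfs1:/var/sqldata /var/opt/mssql/data
```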
One other thing that wasn't in BOL, and that I had to troubleshoot my way through: just like a cluster on Windows, you have a cluster identifier and a floating IP address. I had to add those to /etc/hosts on each of my nodes to get the name to resolve. The article also mentions turning off fencing for non-prod environments; I had to do that in order to get failover working correctly in my environment.
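Concretely, that meant a hosts entry like this on each node (name and address hypothetical), plus the Pacemaker property the article describes for turning off fencing:

```bash
# /etc/hosts on each node: make the cluster name resolve to the floating IP
echo '192.168.2.50   sqlcluster' | sudo tee -a /etc/hosts

# Non-prod only: disable fencing (STONITH) so failover can proceed
sudo pcs property set stonith-enabled=false
```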
Finally, failover was a bit wonky at first, and I had to spend too much time troubleshooting an odd problem. I wrote a Connect item for it: select @@servername and select name from sys.servers return the name of the host, not the cluster name. I'm sure the team will fix this in the near future.
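If you want to reproduce it, connect through the floating cluster name and check for yourself; something like this (cluster name and credentials are hypothetical):

```bash
# Connect to the floating cluster name, not an individual node
sqlcmd -S sqlcluster -U sa -P 'YourStrongPassword' \
    -Q "SELECT @@SERVERNAME; SELECT name FROM sys.servers;"
# Both queries return the current host's name (e.g. node1), not 'sqlcluster'
```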