In the last two days, I’ve been part of two discussions: one about the need to run CHECKDB on modern storage (yes, the answer is always yes, and twice on Sundays), and another about problems with third-party backup utilities. Both of these discussions were born out of infrastructure teams wanting to treat database servers like web, application, and file servers, and have one tool to manage them all. Sadly, the world isn’t that simple, and database servers are a different animal (I’m writing this generically, because I’ve seen this issue crop up with both Oracle and SQL Server). Here’s how it happens: invariably, your infrastructure team has moved to a new storage product, or bought an add-on for a virtualization platform, that will meet all of their needs. In one place, with one tool.
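In case you’re wondering what “always yes” looks like in practice, here’s a minimal sketch of a CHECKDB run (the database name is a placeholder); even on storage that does its own checksumming, this is the only check that validates SQL Server’s own page and allocation structures:

```sql
-- Full consistency check; NO_INFOMSGS suppresses the informational chatter,
-- ALL_ERRORMSGS returns every error found rather than a truncated list
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```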
So what’s the problem with this? As a DBA, you lose visibility into the solution. Your backups are no longer .bak and .trn files; instead, they are off in some mystical repository. You have to learn a new tool to do the most critical part of your job (recovering data), and maybe you don’t have the control over your backups that you might otherwise have. Want to stripe backups across multiple files for speed? Can’t do that. Want to do more granular recovery? There’s no option in the tool for that. Or my favorite one: want to do page-level recovery? Good luck getting that from an infrastructure backup tool. I’m not saying all third-party backup tools are bad; if you buy one from a database-specific vendor like RedGate, Idera, or Quest, you can get value-added features in conjunction with your native ones.
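For the curious, here’s a rough sketch of what native page-level recovery looks like; the database name, page ID, and backup paths are placeholders, and in real life you’d pull the damaged page IDs from msdb.dbo.suspect_pages first and finish with a tail-log backup:

```sql
-- Find pages SQL Server has flagged as damaged
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Restore just the damaged page (file_id:page_id format) from a full backup,
-- then roll forward with log backups; the real sequence also includes
-- restoring a tail-log backup before recovery
RESTORE DATABASE YourDatabase
    PAGE = '1:57'
    FROM DISK = N'X:\Backups\YourDatabase_Full.bak'
    WITH NORECOVERY;

RESTORE LOG YourDatabase
    FROM DISK = N'X:\Backups\YourDatabase_Log.trn'
    WITH RECOVERY;
```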
BTW, if you are using a third-party backup tool and something goes terribly sideways, don’t bother calling Microsoft CSS; they will direct you to the vendor of that software, since they don’t have the wherewithal to support solutions they didn’t write.
Most of the bad tools I’m referring to operate at the storage layer, taking VSS snapshots of the database after briefly freezing I/O. In the best cases, this is inconsequential; Microsoft lets you do it in Azure (in fact, it’s a way to get instant file initialization on a transaction log file; I’ll write about that next week). However, some of these tools have faults, or take too long to complete a snapshot, and that can do things like cause an availability group to fail over, or in the worst case corrupt the underlying database while taking a backup, which is pretty ironic.
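One useful sanity check: SQL Server records VSS snapshot backups in msdb just like native ones, so you can see exactly what a third-party tool is doing to your backup chain. A quick query (the database name is a placeholder):

```sql
-- is_snapshot = 1 indicates a VSS/snapshot backup taken by an outside tool;
-- type 'D' = full, 'I' = differential, 'L' = log
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.type,
       bs.is_snapshot,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'YourDatabase'
ORDER BY bs.backup_start_date DESC;
```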
While snapshot backups can be a good thing for very large databases, in most cases, with a good I/O subsystem and some backup tuning (using multiple files, increasing the transfer size), you can back up very large databases in a normal window. I manage a system that backs up 20 TB every day with Ola Hallengren’s scripts, and no storage magic. Not all of these storage-based solutions are bad, but as you move to larger vendors who are further and further removed from what SQL Server or Oracle actually are, you will likely run into problems. So ask a lot of questions, and ask for plenty of testing time.
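As a sketch of the kind of tuning I mean (the paths and values here are illustrative; the right BUFFERCOUNT and MAXTRANSFERSIZE depend on your hardware, so test on your own system):

```sql
-- Striping across four files parallelizes the writes; COMPRESSION cuts the
-- bytes written; a larger transfer size and buffer count keep the I/O
-- subsystem busy; CHECKSUM validates pages as they're read
BACKUP DATABASE YourBigDatabase
TO  DISK = N'X:\Backups\YourBigDatabase_1.bak',
    DISK = N'Y:\Backups\YourBigDatabase_2.bak',
    DISK = N'Z:\Backups\YourBigDatabase_3.bak',
    DISK = N'W:\Backups\YourBigDatabase_4.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB, the maximum
     BUFFERCOUNT = 64,
     CHECKSUM,
     STATS = 5;
```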
So if you don’t like the answers you get, or the results of your testing, what do you do? The place to make that argument is with the business teams for the applications you support. Don’t do this without merit to your argument; you don’t want to unnecessarily burn a bridge with the infrastructure folks. But at the end of the day, your backups, and more importantly your recovery, ARE YOUR JOB AS A DBA, and you need a way to get the best outcome for your business. So make the argument to your business unit that “Insert Third-Party Snapshot Magic Here” isn’t a good data protection solution, and have them raise it with infrastructure management.