
We are running our ERP software on a SQL Server 2016 server as a VM on VMware ESXi. The VM is currently configured with two drives, with data going to one and logs to the other. We had a problem when the VM became corrupted, and we reverted to a Veeam backup. The restore took quite a while but was successful. Unfortunately, it resulted in the loss of about 4 hours of data, and since the transaction logs went down with the server, we couldn't restore from them.

So we thought about writing the transaction logs to a network share instead. But when we tried to set that up on a test database, SQL Server didn't seem to allow it.

Is there a way to get this done? Is there a significant performance hit? Is there a better approach?

  • Why not just copy the logs from the local save to a secondary remote network location? Commented Jul 24, 2019 at 8:00

1 Answer


Is there a way to get this done?

Yes, but you probably don't want to do this. You can enable Trace Flag 1807 to get SQL Server to use a UNC path for its data and log files. However, from that documentation:

Microsoft generally recommends that you use a Storage Area Network (SAN) or locally attached disk for the storage of your Microsoft SQL Server database files because this configuration optimizes SQL Server performance and reliability.

(Although the linked documentation indicates that SQL 2016 should allow this by default, so you'll need to elaborate on "it appears that SQL Server doesn't allow it".)
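As a minimal sketch of what that looks like in practice (the share path, file paths, and database name below are placeholders, and the trace flag is only needed on versions that don't already permit UNC paths):

    -- Enable trace flag 1807 globally (older versions only;
    -- SQL Server 2012 and later generally allow UNC paths without it).
    DBCC TRACEON (1807, -1);

    -- Create a test database whose log file lives on a network share.
    -- '\\fileserver\sqllogs' is a placeholder; substitute your own share.
    CREATE DATABASE NetworkLogTest
    ON PRIMARY
        (NAME = NetworkLogTest_data,
         FILENAME = 'D:\SQLData\NetworkLogTest.mdf')
    LOG ON
        (NAME = NetworkLogTest_log,
         FILENAME = '\\fileserver\sqllogs\NetworkLogTest.ldf');

Note that the SQL Server service account needs read/write permission on the share; a missing permission is a common reason this kind of attempt fails with an access-denied error.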

Is there a significant performance hit?

It depends on your hardware: where the "local" storage actually lives (spinning disks inside the ESXi host, on a SAN, or somewhere else), and what storage underlies the network share you are considering. They might end up being the same hardware, in which case the performance difference might come down to a bit of VMware overhead. Microsoft goes into some detail about performance considerations in the document linked above.

Is there a better approach?

Yes.

The transaction log files themselves (typically *.ldf) aren't what you want to use to restore from a disaster. Having them local to the server is appropriate. And putting them on a separate drive from the data files (typically *.mdf), as you have done, is a good thing.
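If you want to confirm how your files are currently laid out, sys.master_files lists every database file and its physical path:

    -- List each database file and where it lives on disk.
    -- type_desc is ROWS for data files and LOG for log files.
    SELECT DB_NAME(database_id) AS database_name,
           name                 AS logical_name,
           type_desc,
           physical_name
    FROM sys.master_files
    ORDER BY database_name, type_desc;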

When you have a disaster and need to restore from a backup, you restore from a Full backup, then possibly from some Differential backups, and then possibly from some Transaction Log backups. Thus, you should be writing your backups to a location away from your server. Or at the very minimum, if you must write them locally, have a process that immediately copies them elsewhere. As you've found, many types of failure cause all local data to be unusable.
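As a rough sketch of that pattern (the database name, share path, and schedule are placeholders, and log backups also require the database to be in the FULL recovery model):

    -- Full backup written straight to a remote share (e.g. weekly).
    BACKUP DATABASE ERPDatabase
    TO DISK = '\\backupserver\sqlbackups\ERPDatabase_FULL.bak'
    WITH CHECKSUM, COMPRESSION;

    -- Differential backup (e.g. nightly).
    BACKUP DATABASE ERPDatabase
    TO DISK = '\\backupserver\sqlbackups\ERPDatabase_DIFF.bak'
    WITH DIFFERENTIAL, CHECKSUM, COMPRESSION;

    -- Transaction log backup (e.g. every 15 minutes);
    -- the log backup interval is what bounds your data loss.
    BACKUP LOG ERPDatabase
    TO DISK = '\\backupserver\sqlbackups\ERPDatabase_LOG.trn'
    WITH CHECKSUM, COMPRESSION;

Scheduled through SQL Server Agent jobs (or a maintenance solution), this gets the backup files off the VM as soon as they are written, so a corrupted VM no longer takes your restore chain with it.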

There are many tutorials out there about how to establish a proper backup mechanism. Brent Ozar Unlimited has several good ones.

  • Doug, thanks for this answer. Great advice, and I really appreciate the link to Brent Ozar. Ton of good stuff on his site. I'll be automating the copy of backups and leaving the full and transaction log backup locations where they are. Commented Jul 31, 2019 at 17:32
