Sitefinity Partners share insights on best practices for website disaster recovery and backup scenarios

Posted on October 31, 2011

I asked our Sitefinity partners the question “What do you recommend as best practices around disaster recovery and backup/recovery of a Sitefinity website?” and here is what they shared:

Jochem Bökkers
VIFE, Sitefinity Consulting and Development

I don't know if it qualifies as 'best practices', and it's definitely not Sitefinity-specific, but what we use is a combination of SQL scripts and regular database backups, together with an SVN repository.

During development/testing

We regularly create schema and data SQL scripts from SQL Server Management Studio, preferably before every commit or major Sitefinity change (such as adding a new language to the front-end website or adding custom fields), and add those to the Visual Studio project as solution items (so they get tucked into source control). It's much faster and easier than making full SQL backups and restoring those.
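For illustration, here is a minimal sketch of generating such a combined schema-and-data script with SQL Server's SMO library from PowerShell; the server name, database name and output path are placeholder assumptions, not Jochem's actual setup:

```powershell
# Sketch: generate one schema + data script, as SSMS does via "Generate Scripts".
# Assumes the SQL Server SMO assemblies are installed on the machine.
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"   # placeholder
$db     = $server.Databases["SitefinityDb"]                                  # placeholder

$stamp    = Get-Date -Format yyyyMMdd
$scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter $server
$scripter.Options.ScriptSchema = $true            # CREATE TABLE etc.
$scripter.Options.ScriptData   = $true            # INSERT statements as well
$scripter.Options.ToFileOnly   = $true
$scripter.Options.FileName     = "C:\Scripts\SitefinityDb_$stamp.sql"

# EnumScript (rather than Script) is required when ScriptData is enabled.
$tables = $db.Tables | Where-Object { -not $_.IsSystemObject }
[void]$scripter.EnumScript([Microsoft.SqlServer.Management.Smo.SqlSmoObject[]]$tables)
```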

During production

We've set up a 'production-mirror' internal IIS/SQL instance, where we keep local copies of sites going live. For sites that remain under our 'supervision' we waste one Saturday a month syncing live sites back to the internal mirror. SQL files and www files are grouped in a hierarchical folder structure so we can easily add/update a special repository for them.

For the SQL files we again generate schema and data scripts, so we can easily 'deploy' a given state for testing/development. Depending on where the site is hosted, we try to set up the SQL Server with a backup schedule.

Syncing data

Red Gate's SQL Compare and SQL Data Compare are naturally our tools of choice for synchronizing data, but if that's not a valid option we use the SQL Server Management Studio export option, which doesn't offer the flexibility the Red Gate tools do.

SQL Server 'Denali' (the CTP3 version of the new SQL Server) fortunately incorporates most of the functionality the Red Gate tools offer, so within a few months everything can be done from within SQL Server or by adding a 'Database' project to the Sitefinity solution in Visual Studio. We've been running SQL 'Denali' and 'Juneau' (the Visual Studio 2010 bits) on several machines for a few months now and are quite happy with them.

Restoring the data

We do hosted backups on a daily basis and offline backups once a month, as described above. That means if the data center burns down or the hosting partner goes bankrupt, we lose at most one month of data.

Fortunately we haven't had a production site fail yet, but during development and testing we've gotten quite good at breaking the database.

Most of the time we just wipe the database clean and run the latest two SQL (schema/data) scripts to restore it to its previous state. Sometimes, when we know where the bottleneck is, we just open SQL Server Management Studio and alter the database manually.
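As an illustration of how cheap that restore is, it amounts to something along these lines (database name and script paths are placeholders):

```powershell
# Wipe the dev database and replay the latest schema + data scripts.
sqlcmd -S localhost -Q "IF DB_ID('SitefinityDb') IS NOT NULL DROP DATABASE [SitefinityDb]; CREATE DATABASE [SitefinityDb];"
sqlcmd -S localhost -d SitefinityDb -i "C:\Scripts\schema.sql"
sqlcmd -S localhost -d SitefinityDb -i "C:\Scripts\data.sql"
```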

Restoring the files

Since we've got a 'production-mirror' copy of all the files, the files are the least of our problems. Combined with a source control repository we should, in theory, be able to restore the site to any state we ever committed.

If you use the file storage provider of Sitefinity 4, daily SQL backups aren't enough anymore, and monthly offline mirroring isn't sufficient for this scenario either. We haven't got anything in production yet that utilizes this provider, but when we do we'll need to devise a new protocol. This will probably become some sort of server-side 'zipping' of the file storage folders on a daily schedule.
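Such a protocol could be as simple as a scheduled script along these lines (a sketch only: it uses Compress-Archive from later PowerShell versions, and the paths and 30-archive retention are illustrative assumptions):

```powershell
# Zip the Sitefinity file storage folder once a day and prune old archives.
$stamp  = Get-Date -Format yyyyMMdd
$source = "C:\inetpub\SitefinitySite\App_Data\Storage"    # placeholder path
Compress-Archive -Path $source -DestinationPath "D:\Backups\Storage_$stamp.zip"

# Keep only the most recent 30 archives.
Get-ChildItem "D:\Backups\Storage_*.zip" |
    Sort-Object LastWriteTime -Descending |
    Select-Object -Skip 30 |
    Remove-Item
```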

If someone's wondering why we favor schema/data scripts so much over 'old-fashioned' SQL backups: it takes me roughly one minute to run three SQL scripts from within Visual Studio, instead of manually opening SQL Server Management Studio and trying to restore (or creating a temporary alternative database).

Scott MacDonald
Funi, Sitefinity Certified Partner

For production sites I utilize the built-in SQL 2008 R2 Maintenance Plan utility to do a full backup nightly and transaction log backups every 15 minutes. These back up locally to the RAID 1 mirrored C:\ drive, in the default path where SQL Server was installed. Each server also has a single slower but larger SATA drive used only for backups, normally a 1 TB drive. I use a product called "inSync" from Dillobits that syncs the SQL Server backup folder's daily changed backup files to this backup drive. I also sync all website files and mail directories this way. One weekend a month I manually back up offsite to a server in my office, in case I wake up one morning and the data center is a "smoking crater".
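The T-SQL underneath such a maintenance plan boils down to two commands, sketched here with placeholder names and paths (the log backup assumes the database uses the full recovery model):

```powershell
# Nightly full backup (scheduled once a day):
sqlcmd -S localhost -Q "BACKUP DATABASE [SitefinityDb] TO DISK = 'C:\SQLBackups\SitefinityDb_Full.bak' WITH INIT"

# Transaction log backup (scheduled every 15 minutes):
sqlcmd -S localhost -Q "BACKUP LOG [SitefinityDb] TO DISK = 'C:\SQLBackups\SitefinityDb_Log.trn'"
```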

It's not optimal, I know, but eventually I will be syncing the offsite backups nightly between servers in separate data centers across their private network. I make it very clear, though, to all my hosting clients that backing up the website is their responsibility. We cannot be responsible for lost customer data. All my servers run SwSoft's Plesk control panel, and the clients have everything they need to back up their sites and databases.

I have a few Sitefinity customers that preferred to host their sites with a shared hosting provider, and I try to let them know upfront that they need to have a plan for backups. Unfortunately, one recently forgot to pay their bill with their hosting provider and found out all their Sitefinity 3.7 files and database were gone for good. Ouch! That’s why it is important to carefully choose your website hosting provider.

Hardy Erlinger
Netspectrum, Sitefinity Certified Partner

For production sites we use a custom PowerShell script that creates a backup of both the database and all the application files, compresses them and copies them to an external server using FTP. So far we haven't had any disasters on production sites, but if one should happen we would lose at most one day of data.
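A minimal sketch of that kind of script is below; the server names, paths and credentials are placeholder assumptions, and Compress-Archive (from later PowerShell versions) stands in for whatever compression the real script uses:

```powershell
# Back up the database, zip it together with the site files,
# and push the archive to an external FTP server.
$stamp = Get-Date -Format yyyyMMdd
$bak   = "C:\Backups\SitefinityDb_$stamp.bak"
$zip   = "C:\Backups\Site_$stamp.zip"

sqlcmd -S localhost -Q "BACKUP DATABASE [SitefinityDb] TO DISK = '$bak' WITH INIT"
Compress-Archive -Path $bak, "C:\inetpub\SitefinitySite" -DestinationPath $zip

# Upload via FTP (placeholder host and credentials).
$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("backupuser", "secret")
$client.UploadFile("ftp://backup.example.com/Site_$stamp.zip", $zip)
```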

Development is an entirely different story, though. Usually we start the project locally (using Perforce as the source code repository) and develop it up to the point where the client will start entering their data. Once the project is deployed to the staging server where the client adds their content, our local installation becomes a development and testing sandbox only, i.e. we develop new modules and features locally using sample data and manually deploy them to the staging server once they have been tested.

In order to make deployment as easy as possible, all files of a particular module or widget must be contained in its assembly, i.e. we don't use User Controls that would have to be placed into the "SitefinityWebApp" project. In fact, "SitefinityWebApp" doesn't get changed much anymore once the template and theme files are in place. When a module is ready, we simply copy the assembly to staging/production and update the required configuration.

While I have considered syncing the production databases with our own local copies, I have found that not having all the "text-only" pages in our development environment is actually beneficial: the project stays free of clutter, which in turn allows us to focus on the functionality of the feature or module under development. The local databases are backed up automatically every 30 minutes, so if we break anything we lose at most 30 minutes of work.

Steve Miller
Sitefinity Partner Manager
Founder of Mallsoft, now part of Telerik

There were two types of recovery we had when we were hosting sites. The first was using SyncToy and doing just what Scott mentioned above. I would back up the transaction logs every 30 minutes and sync them to a 1 TB drive using SyncToy (http://www.microsoft.com/download/en/details.aspx?id=15155). This was also used on the websites, meaning any data changes such as images or external files would automatically be synced to a 1 TB USB 2.0 drive. This type of recovery is good for only a few days of data, but it's easy to restore from if someone deleted their files.
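For anyone without SyncToy handy, robocopy (built into Windows) can do a comparable mirror job; a sketch with placeholder paths:

```powershell
# Mirror the SQL backup folder and the web root to the 1 TB backup drive.
robocopy "C:\SQLBackups" "E:\Backups\SQL" /MIR /R:2 /W:5
robocopy "C:\inetpub"    "E:\Backups\Web" /MIR /R:2 /W:5
```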

The second recovery tool we used is Symantec BackupExec with their disaster recovery module. This would back up all sites and DBs every other day, saving them to tape. Iron Mountain would then pick up these tapes at the data center and store them off site. The tapes would be archived for 30 days, with the last day of the month being archived for one year. This option is more expensive, as you need 24+ tapes just for the yearly archive and 15+ for the monthly, but it was well worth it: there were several times I needed these backups well after the data was originally backed up.

The best things to consider are: what if I deleted a file by accident? What if the data center caught fire and I was not allowed in for five days? That actually happened to me: there was an electrical fire in a data center, and water was used to extinguish it. Most data centers have this type of extinguisher.

-------

Please share your own practices and approach in the comments below. What do you do to protect your Sitefinity website from unforeseen disasters?

The Progress Team
