How to create a server failover solution

An automatic server failover solution can prevent your website from going down in the event of a server failure. When a problem is detected on your primary server, traffic is automatically redirected to a backup server.


An automatic server failover solution is surprisingly easy to set up. It works by having two servers with identical content on them – a primary server and a secondary server. A third server monitors the primary server and detects if there is a problem. If there is, it will automatically update the DNS records for your website so that traffic is diverted to your secondary server. Once your primary server is functioning again, traffic will be routed back to it. Most of the time your users won’t even notice a thing!

Automatic Server Failover Solution

Setting up an automatic server failover solution just consists of 5 steps:

  1. Get a secondary server
  2. Synchronize primary and secondary servers
  3. Reveal server status
  4. Set up DNS Failover
  5. Test it!

Step 1: Get a secondary server

The first thing you need is another server to be your secondary/failover server. This doesn’t need to be a dedicated server – if you’re using shared hosting, then another shared hosting account will do just fine. However, it is important to choose a server that is physically separate from your primary server. I recommend not only using a different hosting company, but also making sure that the servers are in different locations.

What good does a secondary server do if it’s on the same network as your primary server when it goes down? Or if a server admin makes a goof that’s propagated to all the servers they manage? Before I sign up with any new host, I research where they’re located to make sure it’s not in the same state as my primary server. You can use the IP Information tool to check the location of any IP address.

IP location lookup

If you don’t know where to find a good secondary server host, let me suggest StableHost. I recently moved over to them after too many problems with HostGator, and I’ve been really impressed. They are the best bang for your buck – great plans with realistic prices (as of now, shared starting at $5/month). Plus they just migrated all their servers to a Provo, Utah location, which I doubt is the location of your primary server. Oh, and here’s a StableHost coupon code for 40% off: expert40

Step 2: Synchronize primary and secondary servers

Right now you have a blank backup server, but sending people there in the event of an emergency will do no good unless it mirrors your primary site. You could copy everything over right now, but then you’d have to manually update the secondary server every time you change the primary server, which is a hassle. What we need next is a way to automatically keep your primary and secondary servers in sync with each other.

Syncing website files

There are two common approaches I typically use to keep my website files in sync: Rsync and Source Code Repositories.

Rsync is a unix utility that synchronizes files and directories from one location to another. You run a simple command similar to “rsync server1:/myfiles server2:mybackupfiles” and it will copy everything inside ‘myfiles’ from server1 into the ‘mybackupfiles’ folder on server2. It also saves time and bandwidth by only copying over what is different, pretty cool!

You’ll run a command similar to the one below on your secondary server (PRIMARY_SERVER_IP is a placeholder for your primary server’s IP address). Be sure to use the IP address of your primary server rather than its host name, since the host name could refer to the secondary server once failover has kicked in (that’ll make sense later). I’d suggest mirroring everything in the public_html directory.

rsync -avz primaryuser@PRIMARY_SERVER_IP:public_html/ /home/secondarywebsite/public_html

In order for this to work, you need to set up ssh keys so that the secondary server is allowed to access the primary server (security!). This tutorial can help you set that up: Setting up ssh keys for rsync. It looks daunting, but really isn’t that bad. In that article, ‘thishost’ is the secondary server, and ‘remotehost’ is the primary. You can skip the ‘validate-rsync’ part if you wish.
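
If you just want the gist of it, the key setup boils down to a few commands run on the secondary server. This is only a rough sketch assuming OpenSSH’s ssh-keygen and ssh-copy-id are available; PRIMARY_SERVER_IP is a placeholder for your primary server’s IP address:

# on the secondary server: generate a key pair (press Enter at the passphrase prompts so rsync can run unattended)
ssh-keygen -t rsa -b 4096

# copy the public key to the primary server so the secondary can log in without a password
ssh-copy-id primaryuser@PRIMARY_SERVER_IP

# verify that you can now connect without being prompted for a password
ssh primaryuser@PRIMARY_SERVER_IP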

If you can run that rsync command successfully from the command line, it’s time to automate it so that it runs on its own. I configured mine to run every night at midnight, but you can make yours run more or less often, depending on how often your server files change. We’ll be automating this with a cron job. Type ‘crontab -e’, and then add this line, where ‘rsync…’ is your rsync command:

0 0 * * * rsync...

An alternative method is to use a source code management system and pull from it regularly. For example, if you have a git repository set up that contains all of your website files, running a ‘git pull’ instead of that rsync command will accomplish the same thing by pulling the latest files from the repo. Here’s a tutorial for Setting up git on a server.
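
If you go the git route, the cron entry is just as simple. Here’s a rough sketch assuming the site lives in /home/secondarywebsite/public_html and the repository’s default branch is what you deploy:

# pull the latest files from the repository every night at midnight
0 0 * * * cd /home/secondarywebsite/public_html && git pull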

Syncing databases

If your website relies upon a database, as WordPress does, then you’ll also need to copy over the database, which is separate from the file sync you’ve just set up. Technically you could try to copy over the raw database files, but I wouldn’t recommend that – it could create inconsistent data in the event of an error. Instead, I’ll show you how to copy over the contents directly from within your database program.

We’ll use MySQL for this example, but all database systems should have similar commands. You’ll first need to have a blank database set up on the secondary server with the same name, user, and password as the primary server’s database. Then we’ll utilize the mysqldump utility to spit out the contents of the primary server’s database and feed it directly to the copy. This command should be run from the secondary server:

mysqldump --host=PRIMARY_SERVER_IP --user=MYDBUSER -pMYDBPASSWORD --add-drop-table --no-create-db --skip-lock-tables MYDBNAME | mysql --user=MYDBUSER -pMYDBPASSWORD MYDBNAME

Be sure to replace the values above with your actual primary server IP address and database user/passwords. Once you’ve verified that this command works, simply run it as a cron job as we did with rsync or git. Run ‘crontab -e’, then add your mysqldump command:

0 0 * * * mysqldump...

Now your database will be mirrored onto your secondary server every night! This solution works well for websites that don’t absolutely need their secondary server to have an exact up-to-date copy of the primary data. Understand that if you add a new WordPress article during the day and then your server fails over to the secondary, the secondary won’t have that new article in its database until the database sync runs again at night. If you need a real-time copy of your data, you can look into setting up a master-master replication solution, or come up with a method of copying over a subset of the data as it changes, since you know how your website works.
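
If you do decide to explore master-master replication, the MySQL side of it starts with a few settings in each server’s my.cnf. This is only a rough starting-point sketch (the full setup also involves creating a replication user and pointing each server at the other with CHANGE MASTER TO), so treat the values below as illustrative rather than a drop-in config:

# /etc/my.cnf on the primary server (on the secondary, use server-id = 2 and auto_increment_offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1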

Most web applications depend on your website files and database to run smoothly, which we have taken care of. If your server depends on other functions, such as mail, you’ll have to come up with a plan to keep those systems in sync as well. Unfortunately that is beyond the scope of this tutorial.

Step 3: Reveal server status

The tool we’re going to use to automatically switch your servers upon failure needs to know when your server is failing. It does this by checking your server every few minutes for a specific response. You could have it check just your homepage to see if it’s serving content, but I like to get a little more detailed. For example, your website may still return a page even if your database is down, which would be a false indicator that everything’s okay when it’s not.

I recommend creating a server status page that reveals if ALL services are functioning properly. Make a simple script that connects to the database and returns ‘SUCCESS’ only if the connection was successful. This ensures that HTTP and database services must be functioning properly to report a successful server status. If there are other services your site depends on, include a check for those too. Here’s a sample PHP script I use to report that my server is healthy:

<?php
$link = mysqli_connect("localhost", "my_user", "my_password", "my_db");

// check connection
if (mysqli_connect_errno()) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit;
}

// perform a simple query; only report SUCCESS if it returns a row
if ($result = mysqli_query($link, "SELECT 1")) {
    if (mysqli_num_rows($result)) echo "SUCCESS";
}

mysqli_close($link);


After that file is set up, you should be able to go to that page in your browser and see SUCCESS!
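
You can also check it from the command line, which is handy when testing from another machine. A quick sketch – the path assumes you saved the script as check/status.php under your web root, and yourdomain.com is a placeholder:

# should print SUCCESS if both the web server and the database are healthy
curl http://yourdomain.com/check/status.php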

Server status page

Step 4: Set up DNS Failover

We’ll be using a service called DNS Made Easy to provide DNS failover. It automatically checks your server status script every few minutes for ‘SUCCESS’. If your server is unreachable, or there’s some other error and it doesn’t see ‘SUCCESS’, DNS Made Easy will change your DNS entries to point to your backup server. As soon as your primary server is reachable again, DNS Made Easy will revert the changes back to your primary server. Since your visitors access your site via your domain name, no one will ever know what just happened – they’ll have been pointed to a functioning server the whole time!

You’ll need to purchase a DNS Made Easy account and then point your DNS to their name servers so they are handling your domain name. Their Business level account ($60/year) includes up to 3 hostnames to monitor (you’ll need 2: one for the www version of your domain and one for the bare domain). Note that this price is per year, not per month – pretty cheap considering the peace of mind it provides!

After you create your account and set up your basic name server settings, you’ll need to configure the automatic failover. In the A-records table of hosts, click the Off in the SM/FO column and input your values:

DNS Failover Settings

Use these settings:

  • Sensitivity: High, so that your server is checked more often and traffic is rerouted more quickly in the event of an outage.
  • File To Query: this is the path to your server status script that should return ‘SUCCESS’. Use a relative path (‘/check/status.php’) that doesn’t include your domain or IP address.
  • String To Query For: SUCCESS, or whatever message you used in your server status script.
  • IP Address 1: This is the IP of your primary server
  • IP Address 2: This is the IP of your secondary server
  • Monitoring Notifications and DNS Failover: make sure these are checked

Click OK to save – it should now read ‘On’ in the SM/FO column next to that host. Then do the same for the hostname with no name next to it (the blank one is for people that access your site without the www in front).

One last thing you’ll need to do is set the TTL to 300 seconds. TTL is the Time To Live, and it determines how long DNS entries should be cached. The lower the value, the quicker DNS is updated for visitors who have already been to your site. It can be set as low as 60 seconds, but 300 is more likely to be honored by big ISPs, who would otherwise use a much larger default value – plus it lowers our queries per month. Set this with the Edit icon while your A-record is checked in the table.
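
Once you’ve saved that change, you can confirm the TTL that’s actually being handed out for your record. A quick sketch using dig (from the dnsutils/bind-utils package); yourdomain.com is a placeholder:

# the second column of the answer is the TTL in seconds – it should be 300 (or counting down from 300 on a caching resolver)
dig +noall +answer yourdomain.com A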

Step 5: Test it!

Make sure you wait at least 2 days before testing this – it can take 24-48 hours for your initial DNS transfer to DNS Made Easy to propagate through the internet. To test, simply rename the server status script so that the monitoring cannot find it. Within 2-4 minutes, you’ll receive an email alerting you that your site is down (you’ll actually receive 2, one for each host). Once this happens, ping your website – it’ll show you the IP address mapped to your domain name.

Run ping to see your IP address

You should see your secondary server’s IP address. If you don’t, it’s because you recently visited your site and the entry was cached. You’ll just have to wait a maximum of 300 seconds (that TTL setting) for it to expire and your computer to make a new lookup. If your ping result doesn’t show a different IP address within 5 minutes, check over all the steps to make sure everything has been set up properly.
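
To make the test concrete, here’s roughly what it looks like from the command line. The paths and domain below are placeholders for your own values:

# on the primary server: hide the status script so the monitoring check fails
mv ~/public_html/check/status.php ~/public_html/check/status.php.off

# from your own computer: see which IP your domain currently resolves to
ping -c 4 yourdomain.com

# when you're done testing, put the status script back
mv ~/public_html/check/status.php.off ~/public_html/check/status.php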


  • In DNS Made Easy's settings, you’ll notice that there are 5 IP addresses listed. This allows you to string up to 4 backup servers! It will keep checking down the chain of servers until it finds one that works, just in case your first secondary server isn’t working either.
  • If your primary server is down for a long time, you might want to disable your automatic server syncing so that your active secondary server doesn’t get interrupted while handling live traffic.
  • You can utilize this approach to help with major server maintenance. Simply fake an outage by renaming the server status script. Traffic will be diverted to your secondary server and you can perform major updates on your primary server. When you’re done, revert the server status!


You’ve successfully created an alternative server, synchronized it to be an exact copy of your primary, and set up failover so in the event of a disaster it will automatically kick in to serve your traffic. Next time your primary server fails you won’t have to do a thing – and you didn’t have to install any failover software! You can rest assured knowing your visitors are able to access your website at all times. Congratulations!

68 comments on “How to create a server failover solution”
  1. chris says:

    Thank you.

  2. chris says:

    What about the email servers? When the website is switched to the new host IP, the email will be routed to the new host… right? What to do in order to have an email backup also?

  3. wilson says:

    amazing your help, thanks so much i really save lot of time with your steps.

    One question, do you know some free services like dns made easy?, just for test purposes.

  4. Benson Trent says:

    Thanks for the guide. Very helpful. I was stumped at the Key/Pair Setup, but this simpler guide from Pai H. Chou helped me out; then I added the security recommendations from the guide you recommended.

  5. someguy says:

    if your datacenter fails the second after your script gets its latest SUCCESS flag and heads out for a 1 minute smoke break before checking again, that’s enough time to flatline the entire farm. it’s also enough to affect your guaranteed uptime averages, depending on the size of your network.

    • You are correct – this is not going to guarantee 99.999% uptime, but should be enough to keep you online within minutes of your primary host failing. Earlier this month I had a server go down (with Hostgator, ugh) and it took 2 days to get back up. Luckily with this solution we were only down a few minutes.

  6. BaxVai says:

    If primary server is down and switch to the secondary one, what i want to do is when the primary is up again, can it go back from the secondary to the primary? thanks a bunch…

    • Yes! DnsMadeEasy will keep checking the primary to see if it’s up, and if so will automatically switch everything back to normal once that happens. If you wish, they also have a setting to stay on backup once the switch has been made.

      • Sanjay Kumar says:

        When Primary is down it will point to Secondary. But when primary is up it will switch to primary again. Its fine, But in case of database Secondary server has the latest database. What about the database sync from secondary to primary.

  7. Tommy says:

    So I’m looking into using the server failover solution you’ve described but have one question. My primary server is dedicated but I would like to use a shared server for the secondary server. From what I’ve seen the shared host will put a prefix on the MySQL database names. How do you address the fact that it may not be possible to have the same database names? I’ve seen suggestions to create a link with the required name that points to the database file in /var/lib/mysql. I can do that on the dedicated server but not on the account I have on a shared server.

    • Hey Tommy – Great question. Is your site custom coded? If so, and you have access to the code that connects to the database, I’d suggest the following: Make a function used to query the database, but have that function check which server it is on to determine what database name to use. Odds are the path of the file will be different on the servers, so you can check for a specific unix user name (home directory) being used. Or you can make a config file that’s not part of the replicated code and make that different for each server. Hope that helps!

  8. manoj prashant says:

    thanku very much for this info
    but when a primary server is congested and has many requests can it pass the services to secondary server???

  9. sanjeev says:

    Possibly silly question, but how do you hedge your monitoring server going down? Another level of monitoring?

    • Not silly, great question! I rely on DnsMadeEasy’s reliability, which has a proven track record. Also they check your site from multiple servers all over the world, and only kick in the failover if it fails to respond from at least 2 different geographic locations. So yes, you’re right that they could go down, but the chances of your servers going down AND their entire network of servers going down at the same time are extremely low.

  10. Visakh says:

    Thanks for your info.

    What is the minimum and maximum time threshold to switch DNS to the slave server once the master detects a problem? Also, how do we sync the configurations and files in both directions between the master and slave servers?

  11. Koen says:

    Great article! I’m currently looking for a good way to create a failover system for our webapplication and we also make use of cronjobs. What’s the best way to determine whether the secondary environment is currently active? (To prevent the jobs from sending mails for example)

    > Curl in the script?
    > Ping
    > …

    Thanks in advance,


  12. David Reid says:

    Thank you so much for this article. It’s a life saver.

    One quick question: If both my primary site and my secondary site go down, is it possible to route traffic to a page that just has a “site down be back soon” message? (Something that I could put on a Google site, for instance.)

    Thanks again!


  13. Anthony says:

    Very nice article, thank you!
    But what about dns caching? some users browsers or OS-es will cache dns results locally, and they will go to broken server, isn’t it?

    • Hi Anthony – Yes, that’s a great point. I’ve referenced it above: “One last thing you’ll need to do is set the TTL to 300 seconds. TTL is the Time To Live, and it determines how long DNS entries should be cached. The lower the value, the quicker DNS is updated for visitors who have already been to your site. It can be set as low as 60 seconds, but 300 is more likely to be honored by big ISPs, who would otherwise use a much larger default value – plus it lowers our queries per month.”

  14. eric says:

    really nice tutorial…is there an alternative for dns made easy ?

  15. Don't want to show yet says:

    Hi Shane,

    Do you do contact work to help people create server failover solutions? If so what is the best way to contact you?

  16. Altaf says:

    Do you offer a paid service. Its too technical for me. I have two dedicated server, one in Utah USA a dedicated machine and one in my office in UAE on VM, please let me know if you could set it up for me for a fee.

  17. Ayush says:

    Awesome thanks for the great article. now i am going to try this out.

  18. Ganesh R says:

    I have a problem with the mysql sync command: “mysqldump --host= --user=MYDBUSER -pMYDBPASSWORD
    --add-drop-table --no-create-db --skip-lock-tables MYDBNAME | mysql --user=MYDBUSER -pMYDBPASSWORD MYDBNAME”. As I am having two cpanel accounts for primary and secondary servers, it’s not possible to have same database table name in both the servers. Kindly let me know what should I do in the case dbnames are different in both the servers!

    And I have one more question, where will this command dump the mysql db in the primary server? I have tried searching the whole cpanel directories in the primary server for the mysql dump file, but havent found it. Is it getting directly exported into the secondary server mysql db or getting dumped first in the primary and then synced along with the other files?

    Thanks in advance!

    • Hi Ganesh – If the database names are different that’s okay, just specify different MYDBNAMEs in the command. But if the table names are different then you will have to list out each table name after MYDBNAME in the mysqldump command. See here for examples:
      The command does not make a physical file on disk, it simply streams it from one server to the other so that there’s no cleanup! Hope that helps!

  19. TheProCoder says:

    How about sending Primary and Secondary severs IP addresses at once? So visitor can connect to either one? Without a problem if any one goes down? Is it better? Can this be used to achieve a 100% uptime?

  20. Joel D says:

    Thanks for the great guide. I’ve the file and database replication happening and am getting ready to set up the EasyDNS step. For the secondary backup server, do I need to have a dedicated static IP address or do IP addresses stay the same?

  21. Lance says:

    I need a contractor to set this up for me. Anyone interested?

  22. Adam B says:

    Hi Shane, great tutorial. I’m currently setting this up for a WordPress e-commerce website. Quick question, if the primary server goes down, and it switches to the secondary server, and someone makes a change on the secondary site (e,g, makes an order etc, edits a page) will this change be reflected on the primary server when it comes back online?


  23. Jenny says:

    Is there a way to make a load balancer and backup server at the same time? So you still have 2 servers, complete replica’s of each other, and both are used or all traffic is diverted to one if the other is down? I have a domain that serves content to several news stations across the U.S. most of which are weather graphics and some very large. I’ve been using CloudFlare to help with the load, and have not had any slowness issues since turning that one. As of recently I am getting continuous 522 errors with CloudFlare so I am having to pause the service in order for my server to respond correctly. I am looking for alternatives and any advice would be greatly appreciated.

  24. Joni says:

    We use a vhosts file on our primary server. Does the vhosts.cnf need to be copied over or created on the secondary server also? Our primary server is a turnkey debian for wordpress, and our secondary server is CentOS, so the file structure is a little different, but I’m assuming that if everything is where it is supposed to be, it will work. We’re looking forward to testing this soon. Thank you for the tutorial.

  25. Younus says:

    Hi Shane

    Thank you for the great post.

    I have read that if the users have already opened the site on the primary IP they need to flush their DNS entry to check the other IP. How true is this?

    I have a lot of visitors on my site and if DNS failover does not work as expected it can cause a havoc.


    • This is true, which is why setting the TTL on your DNS is important. If you set it to a low amount, like a couple of minutes, then the cached entry will expire and their computers will be forced to look up the new IP, which is your backup.

  26. Jack says:

    Is it possible to have the secondary backup server acting as the failover monitor as well?

    As in, could you set it up so that only 2 servers are involved?

  27. Dhinesh says:

    I see DNS made easy basic plan($29.95) has an addon feature, Buy Failover for 4.95 for one IP. I am planning to do that , it will save some cost rather buying Business plan. Do i need to monitor both the IPs or monitoring primary will do ?

    • Hey Dhinesh – Great point, this add on is a new option and is much more cost effective for basic sites. You’ll just need one monitor for your primary IP. Hope that helps!

  28. Srini says:

    Dear Shane,

    Thanks for your answer’s. I am new to this windows IIS. I have 2 quick questions.

    1) I need to host 2 website with one public ip address (primary internet) (website1 @, website2 @, But my firewall (sonicwall NSA2400) only have the option to route the port 80 and 443 to one single private ip. website name are registerd with website1 was in godaddy, Website2 with 1and1. I like to use SSL wild card certificate which was purchase from godaddy. My server’s are running Windows 2008 R2 with IIS7. If DNS made easy help for this? or what technology will will to route one public ip to host two different website.

    2)If we find the solution, we will route the secondary internet for failover.


  29. Ram J says:

    First of all this is a post with good explanation. I have a couple of questions regarding sync between servers. At the worst case consistent data may not be available on a particular server at that point of time when sync hasn’t happened yet, right?. So I was wondering is it possible to feed data from the firewall itself to multiple systems in parallel so that there wont be any need to sync between servers? Kindly educate me if I am wrong. Thanks

  30. Greg says:

    Thx Shane! I want to backup my shared website hosting server to a backup server. Will this work for all of the websites?

    Also, do you do contract work?


  31. ruel says:

    Hi, this is exactly I need, my database is not running on a webbase, its only a local database server running on my LAN network, Is this the same process I need to setup on my servers? Thanks you

  32. I have 2 servers set up and have set up sync between them (each server has the website files and database) according to your instructions. But the website is an intranet. So if one server is down, how can I point my intranet to the secondary server? Please advise.

  33. jafar says:

    I knew StableHost by reading this Great article and became your referral at 31st March 2015.
    I have websites for video collection as:
    These websites publish new posts after every few minutes.
    Can i rely your greatly explained method OR it will be complicated for my websites?
    I am a little bit afraid, please suggest before i dip in all this work.

  34. Henrich says:

    Possibly it is a stupid question. About DNS FAILOVER.

    In the registar of IP1, should I change the NS2 for the name server (

    And the IP2 must also have the name server changed? Or is unnecessary, just pointing to the desired IP2?

  35. Liji.G says:

    Great Article !

    We have few queries can you please guide us

    1) Do have any tutorial on how to write a script to copy the data back once the primary server is up.
    2) Also we need to know whether both the Servers Primary & Secondary needs to be of similar configuration & same brand ??

    • Hi Liji –

      1) There’s no universal script to copy data back, as it depends on your application, what data is necessary vs unnecessary, how often you do the sync, etc. Sometimes if your data rarely changes you don’t need this. I’d suggest having a developer take care of this.
      2) Both servers don’t need to be exactly the same, they just both need to be able to run your website. It’s completely acceptable to have your backup server smaller in size and speed.

  36. jafar says:

    Command to run if remote/primaryserver is not using standard port for ssh
    rsync -avz -e "ssh -p PortNumber" primaryuser@PRIMARY_SERVER_IP:public_html/ /home/secondarywebsite/public_html

  37. Ingo Steinbach says:

    Great article!
    Question: Does this failover also work with ssl connections? You can’t just swop the server without the ssl going ballistic?
    Thanks, Ingo
