
Article · Jul 15, 2021

What’s the difference between high availability and backup again?

Ransomware is on the rise all over the world. Determining now which types of data your business simply needs backed up, and which it simply can't go without, could be a matter of survival.

It’s not just that they’re making headlines more often. Ransomware rates really are rising. Given the recent spate of high-profile attacks, it’s worth remembering the difference between standard backup and high-availability replication.

Our research suggests that the costs of ransomware for businesses can amount to much more than an extortion payment. They include lost hours of productivity, reputational damage, compliance fines and more. But maintaining access to critical data at all times can undermine ransomware actors’ leverage over an organization, reduce recovery time and earn the good graces of regulators and the public.

Ultimately, doing so comes down to answering the question: what data does my business simply need to back up, and what data can my business simply not do without? Knowing the difference helps to determine the Recovery Time Objective (RTO) for a given type of data or application.

A 24-hour recovery time may fall within the RTO for non-essential data and applications. For mission-critical data, on the other hand, a 24-hour recovery period may exceed the acceptable amount of time to be without access to data. It could drive up the cost of a data breach significantly, perhaps even higher than a ransomware payment.

The decision may also come down to how much change-rate data can acceptably be lost. Knowing the acceptable Recovery Point Objective (RPO) can be as important as knowing the required RTO. For instance, a highly transactional system performing critical Online Transaction Processing (OLTP) cannot afford to lose the data generated between backup cycles.
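As a rough illustration of how RTO and RPO figures might be weighed against a backup schedule, here is a minimal Python sketch. The system names, backup intervals and objective values are hypothetical examples, not settings from any Carbonite product.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    backup_interval_hours: float     # time between backup cycles
    estimated_recovery_hours: float  # time to restore from the last backup
    rpo_hours: float                 # maximum acceptable data loss window
    rto_hours: float                 # maximum acceptable downtime

def meets_objectives(p: SystemProfile) -> dict:
    """Check whether a backup-only approach satisfies a system's RPO and RTO.

    Worst-case data loss for scheduled backups is one full backup interval.
    """
    return {
        "rpo_met": p.backup_interval_hours <= p.rpo_hours,
        "rto_met": p.estimated_recovery_hours <= p.rto_hours,
    }

# Hypothetical examples: a nightly-backed-up HR system vs. an OLTP database.
hr = SystemProfile("HR records", backup_interval_hours=24,
                   estimated_recovery_hours=8, rpo_hours=24, rto_hours=24)
oltp = SystemProfile("OLTP database", backup_interval_hours=24,
                     estimated_recovery_hours=8, rpo_hours=0.25, rto_hours=1)

print(hr.name, meets_objectives(hr))      # backup alone is likely sufficient
print(oltp.name, meets_objectives(oltp))  # fails both: candidate for high availability
```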

Well-designed data backup plans tend to be a blend of both standard backup and high availability, so it helps to know the difference when determining which is the better fit for a given system, application or set of data.

Data backup

There are all sorts of good reasons to keep regular, reliable backups of business systems. These may concern the normal conveniences of document retention – not having to begin a project from scratch in the case of accidental deletion, for instance – or to satisfy industry or legal compliance regulations.

These backups are taken at pre-determined time intervals, typically once a day during non-working hours, and stored on a backup server. Backups are often given an associated value called a retention, which allows an organization to keep certain backups for a longer period of time. For instance, a business may decide it's necessary to keep daily backups for a total of 30 days; due to storage concerns, they will drop off the server on day 31. However, regulations or corporate policies may require keeping certain backups longer, so organizations will often designate a monthly or yearly backup with an extended retention of one year, or even up to seven.
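As a rough sketch of the retention logic described above, the snippet below uses the example values from this article (daily backups kept 30 days, with extended monthly and yearly retentions). The rules and function names are illustrative, not those of any particular backup product.

```python
from datetime import date, timedelta

# Example retention policy: daily backups kept 30 days; backups taken on the
# first of the month kept one year; backups taken on Jan 1 kept seven years.
DAILY_RETENTION = timedelta(days=30)
MONTHLY_RETENTION = timedelta(days=365)
YEARLY_RETENTION = timedelta(days=7 * 365)

def should_keep(backup_date: date, today: date) -> bool:
    """Return True if a backup taken on backup_date is still within retention."""
    age = today - backup_date
    if backup_date.month == 1 and backup_date.day == 1:
        return age <= YEARLY_RETENTION    # yearly backup: extended retention
    if backup_date.day == 1:
        return age <= MONTHLY_RETENTION   # monthly backup: one-year retention
    return age <= DAILY_RETENTION         # ordinary daily backup: 30 days

# A daily backup drops off after 30 days; a monthly one survives much longer.
print(should_keep(date(2021, 6, 10), date(2021, 7, 15)))  # False (35 days old)
print(should_keep(date(2021, 6, 1), date(2021, 7, 15)))   # True (monthly backup)
```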

Recently, backup servers themselves have been targeted by ransomware actors. Criminals will study an organization's environment, and specifically its backup services. It's therefore extremely important to have a backup for the backup. One of the preferred methods is a secondary cloud copy of the backup server. Since the cloud copy sits on a separate network, it adds a layer of security, making it more difficult for attackers to reach across to the separate cloud network and target the secondary backup copy.

In most cases, backups like those discussed above have recovery times of hours (for a localized power outage) or even days (for a flooded server room). For an HR system, this RTO may be acceptable. For a point-of-sale system, it could mean significant lost revenue.

High availability

When a backup's RTO and RPO values do not meet the needs for recovering a company's critical systems (OLTP servers, for instance), high-availability replication is an effective alternative for ensuring required operational performance levels are met. High-availability replication accomplishes this by keeping an exact copy of critical servers, maintained by real-time, byte-level replication, that remains powered off until needed.

When that time comes, a failover procedure is initiated and the copy assumes the role of the production system. The failover process typically completes within a matter of seconds or minutes, depending on server configuration and network latency. In cases of hardware failure or data center disasters, high-availability replication can stave off a data loss disaster.
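Conceptually, failover is triggered by some form of health monitoring on the production server. The loop below is a simplified, hypothetical sketch of that pattern; the heartbeat and promotion callbacks are placeholders, not a real replication API.

```python
import time
from typing import Callable

FAILURE_THRESHOLD = 3        # consecutive missed heartbeats before failing over
CHECK_INTERVAL_SECONDS = 1   # kept short so the demo below finishes quickly

def monitor(heartbeat_ok: Callable[[], bool],
            promote_replica: Callable[[], None]) -> None:
    """Fail over once the primary misses several heartbeats in a row."""
    misses = 0
    while True:
        if heartbeat_ok():
            misses = 0
        else:
            misses += 1
            if misses >= FAILURE_THRESHOLD:
                promote_replica()  # the replica assumes the production role
                return
        time.sleep(CHECK_INTERVAL_SECONDS)

# Simulated run: the primary answers twice, then goes silent.
responses = iter([True, True])
monitor(lambda: next(responses, False),
        lambda: print("Failing over: replica is now production"))
```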

However, since replication happens in real time, the replica can be corrupted if the primary is attacked by ransomware. System snapshots may therefore be required to maintain clean point-in-time copies of the system. Snapshots are typically non-intrusive, do not noticeably delay replication and provide a failover point with a better RPO than backup.
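A minimal sketch of the point-in-time idea: given the timestamps of the replica's snapshots and an estimate of when the ransomware infection began, recovery means failing over to the most recent snapshot that predates the infection. The timestamps below are invented for illustration.

```python
from datetime import datetime
from typing import List, Optional

def latest_clean_snapshot(snapshots: List[datetime],
                          infection_time: datetime) -> Optional[datetime]:
    """Pick the most recent snapshot taken before the ransomware hit."""
    clean = [s for s in snapshots if s < infection_time]
    return max(clean) if clean else None

# Hypothetical hourly snapshots on the replica, with infection at 09:40.
snaps = [datetime(2021, 7, 15, hour) for hour in range(0, 12)]
infection = datetime(2021, 7, 15, 9, 40)

print(latest_clean_snapshot(snaps, infection))  # 2021-07-15 09:00:00
```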

As with backup, an off-site cloud solution can step in if on-site servers are out of commission. Latency can lengthen recovery slightly as the off-site cloud copy boots up, but the time to recovery still feels like a blip to users or customers.

For some organizations, no data may be critical enough to warrant this high-availability architecture. For others, all data may be considered essential. For most, the reality will fall somewhere in the middle. And for companies that are highly regulated or bound by specific corporate retention requirements, high-availability replication and backup will likely coexist for the same server.

Ensuring resilience against ransomware

In a blended backup/high-availability strategy, what matters most is deciding which systems are protected by which approach before the worst happens. Whether you handle backup for your own organization or for clients, it's important to have a well-tested backup plan in place that accounts for RTOs based on acceptable amounts of downtime for each data set and application.

Author

Kyle Fiehler

Kyle Fiehler is a writer and brand journalist for Carbonite. For over 5 years he's written and published custom content for the tech, industrial, and service sectors. He now focuses on articulating the Carbonite brand story through collaboration with customers, partners, and internal subject matter experts.
