How Ransomware Changes Backup and Disaster Recovery
Ransomware can corrupt an entire network's worth of data very quickly. Are you ready?
In the olden days of IT, a decent data protection scheme involved taking copies of all your data at relevant intervals and placing that data on a local backup server. That server would then either write the data to tape (to be couriered offsite) or send it out across the Internet to a secondary site or service provider.
When Everything Changed
The local copy of your data means quick recovery in the event of an issue that doesn't involve a site outage. An accidentally deleted file doesn't mean dragging information back across the Internet, and you get the bonus of being able to clone out copies of your data for test and development purposes without impacting production workloads.
Ransomware changed everything.
Many of today's ransomware packages not only corrupt data on a single computer or server, but also corrupt data on backup servers. This could be because the backup servers' shares are exposed on the network; because the ransomware jumped from the primary point of infection over to the backup server; or because the ransomware is exploiting a vulnerability in the operating system or data protection software to corrupt backups directly.
The Dominoes Fall
Top-end ransomware can corrupt an entire network's worth of data very quickly. It can spread from system to system, with each infected system increasing both the speed of infection of new systems and the number of files per second corrupted.
Data protection setups designed to handle a given amount of data changing per day (known as the rate of data churn) can simply be overwhelmed. Continuous Data Protection (CDP) systems can fail outright, while non-CDP systems may find that transferring a day's snapshot takes longer than a day.
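A back-of-the-envelope calculation makes the problem concrete. The figures below are hypothetical examples, not measurements from any real environment:

```python
# Rough check: can the daily snapshot finish within a day?
# All numbers here are illustrative assumptions.

def transfer_hours(churn_gb: float, link_mbps: float) -> float:
    """Hours needed to ship churn_gb of changed data over a link_mbps link."""
    megabits = churn_gb * 8 * 1000   # GB -> megabits
    seconds = megabits / link_mbps
    return seconds / 3600

# A normal day: 200 GB of churn over a 100 Mbit/s offsite link.
normal = transfer_hours(200, 100)     # roughly 4.4 hours: comfortably inside a day

# Ransomware rewriting files en masse: 5 TB of "changed" data.
outbreak = transfer_hours(5000, 100)  # roughly 111 hours: the backup window is blown
```

The point isn't the exact numbers; it's that mass file corruption can inflate the apparent churn rate by an order of magnitude or more, and any replication schedule sized for normal churn will fall hopelessly behind.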
Other versions of ransomware can be subtle, making small changes over time, sometimes repeatedly changing files over the course of months before alerting anyone that they've been infected. In this scenario, the goal of the ransomware creators is to go undetected long enough that uncorrupted copies of the data no longer exist, making data protection irrelevant and forcing the mark to pay the ransom.
Credential reuse is what lets ransomware get out of hand. Typically, only a handful of top-level administrative credentials exist within an organization; this is done to reduce the number of accounts that are open to compromise. The result, however, is that the number of administration planes within an organization is small.
There is a good chance that the virtualization administrator has access to the backup server; or, if there are two accounts, that the same password is used for both. Similarly, if a company manages its own data protection in its entirety, there is also a good chance that the backup server on the primary site uses the same credentials as its counterpart on any secondary sites.
Similarly, the backup server is likely part of the same authentication structure as the rest of the network. If high-level administrative credentials on the production network are compromised, chances are these will allow the backup server to be corrupted as well.
Top-end malware can even look at the backup server configuration, identify where it might be sending disaster recovery copies, then go infect those servers, effectively wiping out the entire organization by eliminating all of its data. (Protip: do not browse the Web as the domain administrator. Ever. For any reason.)
This issue isn't restricted to those who run their workloads on their own premises. Consider public cloud computing: how are you protecting that data?
Many administrators merely use the facilities of the public cloud provider to make snapshots and replicate between regions. Ask yourself how much damage a malicious actor could do if they discovered your credentials. In most cases, a single script could delete your primary data, as well as all of your backups, in seconds.
Indeed, it's easy to argue public cloud solutions are in some ways more vulnerable to these sorts of attacks. This is in part because few on-premises setups are joined up enough that a single credential compromise can wipe out everything, and in part because time has shown that too many people treat data protection in the public cloud as something they don't have to worry about. That is to say, they mistake the workload protection provided by public cloud providers for data protection.
How Many Places Does Your Data Exist?
The proper care and feeding of public cloud-hosted workloads is no different than that of on-premises workloads: if your data doesn't exist in two places, then it simply doesn't exist! This means making a copy of your data and either copying it to another account with totally separate credentials on the same public cloud provider, or (preferably) copying the data somewhere else.
This data can be copied to a completely separate cloud provider, a traditional hosting provider or your own premises. Whether you use a self-service public cloud, a traditional hosted service provider or roll your own, you can't escape the need to care about who and what can access your data, when, where and for how long, nor about how many copies of your data need to be made and with which recovery point objectives (RPOs) and recovery time objectives (RTOs).
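Keeping track of those copies and their RPOs is something you can, and should, automate. As a minimal sketch (the dataset names, timestamps and 24-hour RPO below are all illustrative assumptions, not a real inventory), a scheduled check might flag any dataset whose newest offsite copy has aged past its RPO:

```python
from datetime import datetime, timedelta
from typing import Dict, List

# Hypothetical sketch: verify that the most recent *offsite* copy of each
# dataset still satisfies its recovery point objective (RPO).

def rpo_violations(copies: Dict[str, datetime], rpo: timedelta,
                   now: datetime) -> List[str]:
    """Return the datasets whose newest offsite copy is older than the RPO."""
    return [name for name, last_copy in copies.items()
            if now - last_copy > rpo]

now = datetime(2018, 6, 1, 12, 0)
offsite_copies = {
    "erp-db":    datetime(2018, 6, 1, 6, 0),   # six hours old: fine
    "fileshare": datetime(2018, 5, 29, 6, 0),  # three days old: in trouble
}

print(rpo_violations(offsite_copies, timedelta(hours=24), now))
# ['fileshare']
```

A check like this only matters, of course, if the account running it can't also delete the copies it is auditing; the monitoring credentials should be as separate from the backup-writing credentials as the backups are from production.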
Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.