Backing up and archiving data from the Turbo NAS to various cloud storage services
QNAP offers easy-to-use, feature-rich, and secure cloud backup solutions through a series of NAS Apps that protect your data by storing a copy of it in cloud storage, or archive your aged data to cloud storage for long-term preservation. A variety of enterprise-class public cloud storage services are supported, including Microsoft Azure, Amazon S3, Amazon Glacier, Google Cloud Storage, Rackspace, IBM SoftLayer, HP Helion Cloud, and more. You can also use private cloud storage solutions that are compatible with S3, OpenStack Swift, and WebDAV.
For performance and security control, local network storage is the best choice. But for data that is infrequently accessed (cold data), offloading it to public cloud storage can greatly reduce your storage management efforts and minimize costs, thanks to pay-as-you-go pricing models.
QNAP CloudBackup Apps provide excellent cloud storage gateway solutions for utilizing cloud storage. You get a simple and affordable disaster recovery plan, as well as a worry-free data archival solution for preserving your digital assets or meeting compliance requirements.
- Supports manual and scheduled backup or restore with flexible options
- Supports full and incremental backup
- Supports flexible file filtering using type, date, and size
- Supports SSL transfer encryption and client-side AES-256 encryption
- Supports file compression for cost saving and efficient data transfer
- Supports reserving deleted files in the cloud and configurable retention periods
- Supports preserving advanced ACLs and extended attributes
- Supports detecting sparse files
- Supports cloud archival mode
- Supports concurrent job execution and fast multi-threaded data transfer*
- Supports per-account bandwidth throttling with configurable durations
- Supports restoring directly from the cloud for disaster recovery scenarios
- Supports configurable network error recovery options
- Supports managing multiple cloud accounts
- Supports dashboard for job status and statistics
- Supports job activity logging for tracing history of status and statistics
- Supports flexible selection for region, bucket, and folder as backup destinations
- Supports large file backup by S3 multi-part upload and OpenStack large objects
- Supports AWS S3 Server Side Encryption and Reduced Redundancy
- Supports special cloud regions: AWS China, AWS GovCloud, and Azure China
- Supports using S3- and OpenStack-compatible services as backup destinations
*Please note: The WebDAV CloudBackup App does not support multi-threaded data transfer.
How to start
Follow these steps to start backing up your data to cloud storage:
- Choose and install the CloudBackup App for your cloud storage from the QTS App Center.
- Create an account in the CloudBackup App using your cloud storage credentials.
- Create backup jobs in the CloudBackup App with filters and policies.
- Create restore jobs in the CloudBackup App either from backup jobs or from the cloud.
- View the dashboard or the job list in the CloudBackup App to know the job’s status.
To install the CloudBackup App on your Turbo NAS, sign into the QTS web interface as an administrator. Then go to “App Center”, find the CloudBackup App and click “Add to QTS”. After downloading and installing the App, you will find its icon on your QTS desktop and Quick Start menu. Click the icon to start using the App.
QNAP provides one CloudBackup App for one specific cloud storage technology. You can choose to install one or more CloudBackup Apps for the cloud storage you want to use. Every App is optimized according to the cloud storage technology to maximize its capabilities and can be updated independently.
Cloud storage is offered in a variety of technologies and implemented in different ways. There are basically three types: file storage, object storage, and archival storage.
File storage is similar to a regular file system on your computer, so you can manage your data in the cloud in a similar way. File-based cloud storage services tend to offer not only data storage but also applications, like online editing and collaboration. Most of them are marketed as drives, like Google Drive, Amazon Drive, or Microsoft OneDrive, and they tend to support only proprietary APIs. WebDAV is the most popular protocol supported by file-based cloud storage.
Object storage is designed for massive scalability and high availability. Every object is independent of other objects, and there is no tree-like folder structure. You may think of objects as a huge number of files in a single folder with no subfolders. Each object is identified by a key (a text string serving as a unique identifier), and people tend to use path-like keys to manage objects. For example, you may have two objects, “/abc/object1” and “/abc/object2”, such that object storage client applications can present the two objects, “object1” and “object2”, as two files in a folder, “/abc/”. In fact, nothing named “/abc/” exists in the object storage.
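The path-like key convention above can be illustrated with a short Python sketch, where a plain list of keys stands in for a real object-storage bucket listing:

```python
# Object storage holds a flat set of keys; "folders" are purely a
# client-side convention derived from "/"-separated key prefixes.
def list_folder(keys, prefix):
    """Present the objects under `prefix` as if they were files in a folder."""
    entries = set()
    for key in keys:
        if key.startswith(prefix):
            rest = key[len(prefix):]
            # Show only the first path segment below the prefix,
            # mimicking a one-level directory listing.
            entries.add(rest.split("/", 1)[0])
    return sorted(entries)

keys = ["/abc/object1", "/abc/object2"]
print(list_folder(keys, "/abc/"))  # ['object1', 'object2']
```

The "folder" /abc/ never exists as an object; it appears only because the client groups keys by prefix.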
Archival storage is an emerging cloud storage service. It is similar to object storage but is designed for storing infrequently accessed (cold) data at lower cost, so retrieving an archived object takes longer and comes with some extra limitations.
To overcome the variety in cloud storage technologies and to provide a consistent user experience, CloudBackup Apps transform your files and folders for the various cloud storage technologies and manage your data in the cloud in a way that makes the most of each technology's features.
The following sections describe the characteristics of each cloud storage service. However, as cloud storage technologies evolve quickly, the information provided here is only for your reference and may be outdated. Please consult your cloud storage vendor for detailed information.
Azure is the public cloud service provided by Microsoft. QNAP offers an Azure Storage App for using Azure Blob Storage, which is the object storage service of Azure for storing and retrieving large amounts of unstructured data. There are two types of Azure Blob Storage: Page Blob and Block Blob. The App uses Block Blob Storage, and each of your files is stored as a single object in Azure.
If your company has a contract with Microsoft, you can leverage your current contract to use Azure services. You can also get better performance if you have a better Internet connection with the nearest Azure datacenter. The CloudBackup App supports Azure standard regions as well as Azure China. Please note that the Azure Storage App is not related to Azure Backup service. For more information, please visit http://azure.microsoft.com/.
Amazon Web Services (AWS) Simple Storage Service (S3) is a secure, durable, and highly-scalable object storage service. Amazon S3 is one of the most scalable cloud storage services on the market. QNAP offers an S3 Plus CloudBackup App for accessing the S3 service. It supports all AWS standard regions around the world as well as special regions like China (https://www.amazonaws.cn/) and US Government Cloud (GovCloud: http://aws.amazon.com/govcloud-us/). For more information, please visit http://aws.amazon.com/s3/.
There are many other Internet service providers and IaaS providers that offer S3-compatible services, including DreamHost, SoftBank, SFR, and hicloud S3. There are also enterprise storage vendors that enable you to build a private S3-compatible cloud storage, including the QNAP Object Storage Server. The S3 Plus App allows you to store your data in any S3-compatible cloud storage.
Amazon Glacier is an extremely low-cost cloud archive storage service that provides secure and durable storage for data archiving and online backup. Glacier costs less than regular object storage, and its data durability matches or even exceeds that of object storage. However, retrieving data from Glacier cannot be started immediately: it can take 4 or 5 hours before you can start downloading data from Glacier. For more information, please visit http://aws.amazon.com/glacier/.
QNAP offers the Glacier CloudBackup App for using Amazon Glacier. It supports all AWS standard regions around the world, including China. The Glacier App stores each of your files as an archive and submits data retrieval jobs when you want to restore your data. It is recommended to use Glacier only for storing infrequently-accessed data.
Google Cloud Storage
Google Cloud Storage is the object storage service of the Google Cloud Platform. It allows worldwide storing and retrieval of any amount of data in a secure and cost-effective manner. Your data is protected through redundant storage at multiple physical locations with Google's own reliable and fast networking infrastructure. For more information, please visit https://cloud.google.com/storage/.
OpenStack is the leading open-source and vendor-neutral cloud computing solution. It has been adopted by many public cloud service providers, including Rackspace, IBM SoftLayer, and HP Helion Cloud. You can also install your own OpenStack as a private cloud system, with support from many leading IT vendors, like IBM, HP, and Red Hat. QNAP also offers the Object Storage Server for QTS to provide OpenStack object storage. QNAP offers the OpenStack Swift CloudBackup App for accessing the object storage service in an OpenStack system. For more information about OpenStack, please visit http://www.openstack.org/.
Web Distributed Authoring and Versioning (WebDAV) is an extension of the Hypertext Transfer Protocol (HTTP) that facilitates collaboration between users in editing and managing documents and files stored on web servers. WebDAV is one of the most popular protocols used for accessing files over the Internet, because of its HTTP-based design. For more information about WebDAV, please visit http://www.webdav.org/.
WebDAV is supported by commercial and open-source web server software. Many online storage providers also offer WebDAV accessibility in addition to their proprietary protocols. QNAP offers the WebDAV CloudBackup App for accessing WebDAV-compatible cloud storage. Because there are many online storage services that provide free space, you can use the WebDAV App to back up your data to multiple cloud storage services in order to take advantage of as much free space as possible. There are also cloud services that work as a protocol translator and allow you to use WebDAV to access a variety of cloud storage services so that you can access all of your cloud storage from one place.
CloudBackup Apps allow you to configure multiple accounts, each of which represents the cloud storage service and the credentials needed to access the cloud storage. When you create a backup or restore job, you can specify the account to be used for the job, instead of entering the account information for each job. You can change the account information in one place to make it effective for all related jobs. This simplifies your configuration process.
In most cases, you will need only one account for all of your jobs. However, you can use multiple accounts for better security and separation of duties. If you want to store your data in different locations for risk mitigation or cost reduction, you can also have one account for each location. As rate limit control is applied per account, you can have different rate limit settings for the same cloud storage by assigning different accounts to your jobs.
You can create, edit, and delete accounts and configure the rate limit settings for each account by using the [Action] buttons. If you modify the cloud storage account identity, like changing its username or access key, the state of the corresponding jobs will be reset and all files will need to be uploaded again as the backup destination has also changed.
The settings for an account depend on the cloud storage in use. The following sections provide the information for each type of cloud storage technology.
The Azure Storage App allows you to use Azure Storage service from Azure global sites and the China site. As the two sites are managed separately, you cannot use the credentials of one site to access the cloud services of another site. To access Azure Storage, you will need to have a storage account and the access key from your cloud management console and enter the information when creating an account in the App.
The region of a storage account in Azure is determined when you create it. You can also configure data protection levels for your Azure storage account in the Azure Storage management console.
The S3 Plus App allows you to use the following S3 services:
- AWS global service: regular AWS service account
- AWS GovCloud service: AWS datacenter for US governments
- AWS China service: AWS datacenter for users in China
- S3-compatible service: public or private cloud storage that offers S3-compatible API
- QNAP OSS service: a Turbo NAS with the Object Storage Server App installed
To access S3 cloud storage, you will need an access key and secret key from your cloud management console. Enter the key information when creating an account in the App. To use S3-compatible storage, you also need the IP address or domain name of the storage, as well as the port number if the standard HTTP or HTTPS port is not used.
Please note that the AWS sites are managed separately. You cannot use the credentials of one site to access the cloud services of another site. For example, you cannot use the credentials of AWS global service site to access AWS China services. We also recommend that you use access keys of AWS IAM users instead of AWS root account access keys.
The Glacier App allows you to use the Glacier service from the AWS global and China sites. As the two sites are managed separately, you cannot use the credentials of one site to access the other. To access Glacier cloud storage, you will need an access key and secret key from your cloud management console. Enter the key information when creating an account in the App. We also recommend that you use access keys of AWS IAM users instead of AWS root account access keys.
Google Cloud Storage
To access Google Cloud Storage, you will need a service account email and P12 key file from your cloud management console. Enter the email and upload the P12 key file when creating an account.
The OpenStack Swift App allows you to access the following services using OpenStack object storage protocols:
- IBM SoftLayer (https://www.softlayer.com/)
- RackSpace (including the US and UK sites) (https://www.rackspace.com/)
- HP Helion Public Cloud (http://www.hpcloud.com/)
- OpenStack-compatible service: public or private cloud storage that offers an OpenStack-compatible API
- QNAP OSS service: a Turbo NAS with the Object Storage Server App installed
Because OpenStack-based services and software provide various account management and access control mechanisms, please visit your cloud service vendor’s website or consult your vendor for more information. To use OpenStack-compatible storage, you also need the authentication service URL and must specify how large objects in the cloud storage are accessed: either using Dynamic Large Object (DLO) or Static Large Object (SLO).
The WebDAV App allows you to use WebDAV to access various cloud storage services. Although WebDAV is a standard protocol, cloud storage service providers may offer different authentication parameters. Please consult your service providers for more information.
The data backup is performed by backup jobs in CloudBackup Apps. When started, a backup job will scan the specified folders with user-configurable filtering policies to determine which files and folders need to be backed up. It will then start to upload files to the cloud storage and create folders if needed. It will also store the information (including access permissions, modification time, and extended attributes) associated with the backed-up files and folders in a metadata database. After all files have been uploaded, the App will pack the metadata database into a file and upload it to the cloud.
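The flow described above (scan, filter, upload files, record their attributes, then upload the packed metadata database last) can be sketched roughly as follows. The `upload` callable and the JSON metadata format here are illustrative assumptions, not the App's actual implementation:

```python
import json
import os

def run_backup(source_dir, upload, keep=lambda path: True):
    """Sketch of one backup pass: scan the source folder, upload files
    that pass the filter, then upload a packed metadata database last,
    as the App does after all files have been transferred."""
    metadata = {}
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            if not keep(path):
                continue  # skipped by the filtering policy
            st = os.stat(path)
            rel = os.path.relpath(path, source_dir)
            with open(path, "rb") as f:
                upload(rel, f.read())  # transfer the file content
            # Record per-file information; the real App also stores
            # ACLs and extended attributes here.
            metadata[rel] = {"mode": st.st_mode, "mtime": st.st_mtime,
                             "size": st.st_size}
    # Pack the metadata database and upload it after all files.
    upload("__metadata__", json.dumps(metadata).encode())
    return metadata
```

A stand-in `upload` that stores entries in a dict is enough to exercise the flow without any real cloud backend.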
Viewing backup jobs
You can create multiple backup jobs with different configurations and execution timing settings in the CloudBackup App. These backup jobs are presented in a list with their status and basic information. You can press the [Action] buttons to manually start or stop a job, edit job settings, view job status, and delete a job.
Creating a backup job
The CloudBackup App helps you to create a backup job using a wizard with the following steps:
Select the source folder
You can browse shared folders and choose their subfolders for backup. If you want to back up only the first-level files in a folder, check “All files in a folder”. Hidden files and folders are displayed here, but you can use the filtering options to skip backing them up.
Configuring the schedule of the backup job
There are four scheduling options for a backup job: (1) manual start, (2) periodic, (3) start once at a specific time, and (4) start after another backup job.
For periodic schedules, you can choose to run the job from one to four times a day, week, or month, and you can specify the job’s starting time. If a job has not finished when its next scheduled run starts, the currently-running job continues and the scheduled run is aborted.
As running multiple jobs at the same time can consume excessive system resources, you can use the “after this job finishes” option to run your jobs sequentially instead of simultaneously.
Configuring the backup policy
You can configure the following backup policies:
(1) Only backup updated files: when enabled, the backup job only uploads new and modified files to the cloud storage. This reduces the amount of data transferred and helps the backup finish faster. If you want to force the job to upload all files to the cloud storage, disable this option. If you choose lower-reliability cloud storage to save costs, you can turn off this option to ensure all of your backup data is refreshed each time the backup job runs. Please also note that if you rename a file/folder, the backup job considers it a new file/folder and uploads the data again; the previously backed-up file/folder is marked as deleted. As most cloud storage offers free data transfer for uploads, and the deleted data can be cleaned up later, this behavior has no severe impact other than longer backup times.
(2) Clean up deleted data on the cloud: when enabled, if data in the Turbo NAS is deleted, its corresponding backup data on the cloud storage will also be deleted when the backup job is executed. If you want to keep the backup data after its local data is deleted, please specify the number of days to retain the deleted files.
(3) Preserve ACL and extended attributes: when enabled, the ACLs (including the advanced access controls set by Windows clients) and extended attributes of your data will be stored in the metadata database, which is uploaded to the cloud storage by the backup job. When you restore the data, the corresponding ACLs and extended attributes will be applied to the restored data. However, because access permissions are preserved using user IDs and group IDs (not names), please ensure the system where you restore the data uses the same user IDs and group IDs as the Turbo NAS, so that access control of the restored data works properly.
(4) Client-side encryption: when enabled, your files will be encrypted before being transferred to the cloud storage, and your data will remain encrypted in the cloud storage. The encryption key is derived from the password you enter for this job; without your password, your data cannot be decrypted. This prevents unauthorized access to your confidential data even if your cloud storage credentials are compromised or if your cloud storage provider tries to access your data. As standard openssl is used for encrypting the files, you can use openssl to decrypt files downloaded with other utilities, without using a Turbo NAS. Please note that you cannot change this setting after a job is created.
(5) File compression: when enabled, your files will be compressed before being transferred to the cloud storage. This helps to speed up the backup job, reduce your bandwidth usage, and save cloud storage capacity. You can configure the compression level and exclude certain files from compression. The compression is carried out using bzip, so if you download your files with other utilities, you can decompress them using bzip even without a Turbo NAS.
(6) Detect sparse files: sparse files are files whose content is not entirely populated with valid data, so their logical size is larger than their physical size. Files generated by database servers tend to be sparse files, saving space while simplifying internal software design. Without this option enabled, the backup job will not check the validity of file contents, so the logical size will be the amount of data transferred to and stored in the cloud storage. Enabling this option helps speed up the backup job, reduce bandwidth usage, and save your usable cloud space. The data in the cloud storage can later be restored to the local file system as the original sparse files by CloudBackup App restore jobs.
(7) Delete source after successful run: this option provides a simple data archival solution for you to save space on local disk volumes. When this option is on, files on the local file system that have been backed up to the cloud storage will be deleted, while the folder structures of the selected source will be retained.
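Because policy (5) uses standard bzip compression, files downloaded with other utilities can be decompressed with ordinary bzip2 tooling. Assuming the standard bzip2 format, Python's stdlib `bz2` module shows the round trip:

```python
import bz2

original = b"example file content " * 100

# What the backup job does before transfer (compression levels 1-9).
compressed = bz2.compress(original, compresslevel=9)

# What you can do yourself after downloading the file with another
# utility, without needing a Turbo NAS.
restored = bz2.decompress(compressed)

assert restored == original
print(len(original), "->", len(compressed), "bytes")
```

Highly repetitive data like this compresses well; already-compressed formats (JPEG, MP4, ZIP) gain little, which is why the App lets you exclude certain file types from compression.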
To ensure the consistency of your backup data, please do not modify your data during the backup. A file will be re-uploaded several times if it is being modified while a job is transferring it. If the number of re-uploads exceeds the limit, the file will not be uploaded. Similarly, if a file is moved, renamed, or deleted after the scan, the backup job will not find it during the transfer.
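The change detection behind policy (1), "Only backup updated files", can be sketched as a comparison against the previous run's metadata. The size/mtime criterion used here is an assumption for illustration; the App's actual comparison logic is not documented:

```python
import os

def needs_upload(path, previous):
    """Decide whether a file must be (re)uploaded, comparing its current
    size and modification time against the record from the last run.
    `previous` maps path -> (size, mtime) from the metadata database.
    A renamed file has no record under its new path, so it is treated
    as new and uploaded again, as described above."""
    st = os.stat(path)
    record = previous.get(path)
    if record is None:
        return True  # new (or renamed) file: always upload
    return (st.st_size, st.st_mtime) != record
```

Disabling the policy corresponds to skipping this check entirely and uploading every file on every run.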
Configuring the backup filters
The CloudBackup App provides various filtering options to help you only back up important data to speed up the backup job, reduce your bandwidth usage, and save your cloud expenses.
You can configure the following backup filtering options:
(1) File size: you can choose to only back up files of a specific range of file sizes.
(2) File date: you can choose to only back up files created/modified on specific dates.
(3) Ignore symbolic links (shortcuts): a backup job does not follow symbolic links to retrieve files or browse into target folders. When enabled, the backup job will not upload linked files; when disabled, the backup job will upload the linked files (only the symbolic links themselves, not the files they point to).
(4) Include hidden files and folders: when enabled, hidden files and folders will be backed up to the cloud. You may disable it to save bandwidth and cloud storage space, if the hidden files or folders can be derived from the original data (e.g., thumbnails are derived from photos and video files).
(5) File type: you can choose to back up only specific file types (whitelist) or back up all file types except specific types (blacklist).
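A whitelist/blacklist file-type filter like option (5) can be sketched in a few lines; the extension-based matching is an assumption about how "file type" is determined:

```python
import os

def passes_type_filter(filename, types, mode="whitelist"):
    """Return True if `filename` should be backed up.
    `types` is a set of lowercase extensions such as {".jpg", ".mp4"};
    `mode` selects whitelist (back up only these types) or blacklist
    (back up everything except these types)."""
    ext = os.path.splitext(filename)[1].lower()
    if mode == "whitelist":
        return ext in types
    return ext not in types

print(passes_type_filter("photo.JPG", {".jpg"}))               # True
print(passes_type_filter("movie.avi", {".avi"}, "blacklist"))  # False
```

Note the case-insensitive match, so "photo.JPG" and "photo.jpg" are treated as the same type.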
Configuring the cloud storage
You are presented with different settings depending on the cloud storage and account you choose. The following sections provide details about each cloud storage technology.
(1) Microsoft Azure
You must select a container and folder as the destination of your backup data. You can also use a new container and a new folder. One folder can be used only by one job. The block size defines the size of data unit to be transferred. The backup job will create one object in the cloud storage for each backed-up file.
The region of a storage account in Azure is determined when you create it. You can also configure a data protection level for your Azure storage account in the Azure Storage management console. If you choose to reduce the protection level, the reliability of your data will be lower than that of standard cloud storage; you risk losing your data and may be unable to restore some of your files.
(2) Amazon S3
You must select a bucket as the destination of your backup data. Selecting a folder is optional, but recommended because the number of buckets you can create may be limited. You can also use a new bucket and a new folder. One folder can be used by only one job. Please note that bucket names are shared across all AWS accounts, so you can only create a bucket with a name that is not used by any other AWS account. If you enter a bucket name owned by another AWS account, an error message will be displayed. The backup job will create one object in the cloud storage for each backed-up file. If you choose to use a new bucket, you can specify its region.
The multipart size defines the size of the data unit to be transferred. Using larger multipart sizes can improve data transfer speeds, as fewer network connections are required for transferring a file. However, larger multipart sizes require retransmission of larger chunks of data when a connection is interrupted. Do not use large multipart sizes if your network is unstable. On the other hand, there is also an upper limit on the number of parts of a single file you can upload, depending on the cloud storage specification. For example, Amazon S3 allows up to 10,000 parts. If you use 32MB as the multipart size for your job, the maximum size of a single file will be 320,000 MB. There is also S3-compatible cloud storage that does not support multipart uploading, such that the maximum file size can only be as large as the configured multipart size. For example, if you use 32MB as the multipart size for your job, the maximum size of a single file will be 32 MB. Please consult your cloud storage vendor for more information.
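The part-count limit translates directly into a maximum single-file size, as the arithmetic below shows:

```python
def max_file_size_mb(multipart_size_mb, parts_limit):
    """Largest single file (in MB) a job can upload, given the configured
    multipart size and the service's part-count limit."""
    return multipart_size_mb * parts_limit

# Amazon S3 allows up to 10,000 parts per multipart upload.
print(max_file_size_mb(32, 10_000))  # 320000, matching the text

# An S3-compatible service without multipart support effectively allows
# only 1 part, so the maximum file size equals the multipart size.
print(max_file_size_mb(32, 1))       # 32
```

Doubling the multipart size therefore doubles the largest file a job can back up, at the cost of larger retransmissions on interrupted connections.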
For the Amazon S3 service, you also have options for Reduced Redundancy Storage (RRS) and Server Side Encryption (SSE). Turning on RRS helps you save costs, but the reliability of your data will be lower than that of standard cloud storage; you risk losing your data and may be unable to restore some of your files. SSE encrypts your data on Amazon's servers. It provides basic protection for your data, but your data can still be decrypted by anyone who has your Amazon S3 credentials. Please use client-side encryption in the backup policy if you need stronger protection.
The S3 Plus App does not support AWS S3 object lifecycle settings. Please do not enable lifecycle settings on the bucket you choose for backup. The Glacier App is recommended if you want to back up your data to Glacier.
(3) Amazon Glacier
You are required to select a vault as the destination of your backup data. You can also use a new vault. One vault can be used only by one job. The backup job will create one archive in the cloud storage for each backed-up file. If you choose to use a new vault, you can specify its region.
The multipart size defines the size of the data unit to be transferred. Using larger multipart sizes can improve data transfer speeds, as fewer network connections are needed for transferring a file. However, larger multipart sizes require retransmission of larger chunks of data when a connection is interrupted. Do not use large multipart sizes if your network is unstable. On the other hand, Amazon Glacier supports up to 10,000 parts for a single file upload. For example, if you use 32MB as the multipart size for your job, the maximum size of a single file will be 320,000 MB.
(4) Google Cloud Storage
You are required to select a bucket and folder as the destination of your backup data. You can also use a new bucket and a new folder. Only one folder can be used by one job. The backup job will create one object in the cloud storage for each backed-up file. If you choose to use a new bucket, you can specify its region.
The chunk size defines the size of the data unit to be transferred. Using larger chunk sizes can improve data transfer speeds, because fewer network connections are needed to transfer a file. However, larger chunk sizes require retransmission of larger chunks of data when a connection is interrupted. Do not use large chunk sizes if your network is unstable.
You can also enable the Durable Reduced Availability (DRA) option to save costs, but the reliability of your data may be lower than that of standard cloud storage; you risk losing your data and may be unable to restore some of your files.
(5) OpenStack Swift
You are required to select a bucket as the destination of your backup data. Selecting a folder is optional, but recommended because the number of buckets you can create may be limited. You can also use a new bucket and a new folder. One folder can be used by only one job. The backup job will create one object in the cloud storage for each backed-up file. If you choose to use a new bucket, you can specify its region.
The block size defines the size of the data unit to be transferred. Using larger block sizes can improve data transfer speeds, because fewer network connections are needed for transferring a file. However, larger block sizes require retransmission of larger chunks of data when a connection is interrupted. Do not use large block sizes if your network is unstable.
On the other hand, there is also an upper limit on the number of blocks used when uploading a single file, depending on the cloud storage specification. For example, if you use 32MB as the block size for your job, and the maximum number of blocks per file is 1,000, the maximum size of a single file will be 32,000 MB. Please consult your cloud storage vendor for more information.
If the file names of your backup data contain multi-byte characters or special characters, you need to enable the name encoding option so that your data can be properly stored in the cloud storage. When you restore the data, the original file names will be restored by the CloudBackup App.
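The exact name encoding scheme the Apps use is not documented. Percent-encoding is one common, reversible way to make arbitrary file names safe for a backend with a restricted character set, sketched here purely for illustration:

```python
from urllib.parse import quote, unquote

def encode_name(name):
    """Reversibly encode a file name so that multi-byte and special
    characters survive a storage backend with a restricted character set.
    (Illustrative only; the App's real scheme may differ.)"""
    return quote(name, safe="")

def decode_name(encoded):
    """Recover the original file name, as the App does on restore."""
    return unquote(encoded)

name = "报告 2024?.txt"
stored = encode_name(name)
print(stored)  # percent-encoded, ASCII-safe form
assert decode_name(stored) == name
```

Whatever the actual scheme, the key property is the same: encoding must be lossless so the CloudBackup App can restore the original file names.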
(6) WebDAV
You are required to select a folder as the destination of your backup data. You can also use a new folder. If the file names of your backup data contain special characters, you need to enable the name encoding option so that your data can be properly stored in the cloud storage. When you restore the data, the original file names will be restored by the CloudBackup App.
Configuring the job name and options
In addition to naming your job, you can configure the following options:
(1) Timeout: the maximum number of seconds to wait when setting up a network connection. If your network is unstable or your cloud storage takes longer to set up connections, increase this number.
(2) Number of retries: the maximum number of retries a job will perform when setting up a network connection or processing a file. If your network is unstable or your files may be modified during the backup, increase this number. Please note that if the cloud storage service can be connected to but reports internal errors, the backup job will retry indefinitely in order to recover from this situation.
(3) Retry interval: the number of seconds to wait before a job retries a connection. Using a longer retry interval time can help your job survive transient network connectivity errors or cloud storage availability issues.
(4) Maximum skipped files: the maximum number of files that cannot be transferred. If the number is exceeded, the job will be stopped. It is recommended to use a large number to ensure the success of your job. However, you can use a smaller number if you do not want to prolong a failing job.
(5) Concurrent processing files: the maximum number of files that can be simultaneously processed and transferred by a job. You can get better transfer speeds when you allow more files to be processed at the same time. However, this also consumes more system resources. Reducing this number can reduce your system load and minimize the performance impact on regular system activities.
(6) Cloud storage usage warning threshold: you can get warning messages if too much cloud storage capacity is used by the backup job. This helps to control the cost of your cloud storage and to avoid mistaken configurations of your job.
(7) Job execution time warning threshold: you can get warning messages if your job takes too long. This helps to detect problems such as network instability or slowdowns.
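The connection-handling options above can be sketched in code. The following is an illustrative Python sketch of how a timeout, a retry count, and a retry interval typically interact when transferring one file; all names and values here are hypothetical, not the App's actual implementation.

```python
import time

MAX_RETRIES = 3        # "Number of retries" option
RETRY_INTERVAL = 30    # "Retry interval" option, in seconds
CONNECT_TIMEOUT = 120  # "Timeout" option, in seconds

def transfer_with_retries(upload, path, clock=time.sleep):
    """Try to upload `path`, retrying transient errors; raise if all retries fail."""
    last_error = None
    for attempt in range(1 + MAX_RETRIES):
        try:
            # The timeout bounds how long each connection attempt may take.
            return upload(path, timeout=CONNECT_TIMEOUT)
        except ConnectionError as exc:
            last_error = exc
            if attempt < MAX_RETRIES:
                clock(RETRY_INTERVAL)   # wait before the next attempt
    # After exhausting the retries, the file counts toward "skipped files".
    raise last_error
```

A longer retry interval lets the job ride out transient outages; a larger retry count covers files that change while being read.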
7. Confirming the settings
You can review all of your settings here. To change any settings, use the [Back] button to return to previous pages.
Your job is created as soon as you reach this page.
Editing a backup job
You can change the settings of a backup job by clicking on the [Edit] button in the [Action] column of the backup job list. However, the changes will only be applied after the job is executed again. Please also pay attention to the following notes:
- Changing the backup destination: if you change the backup destination, such as using another cloud account or using another folder, all of your data must be backed up again.
- Changing the job name: if you change the job’s name, you will be unable to see previous event logs in the job’s log viewer, as the logs are indexed by the job’s name.
- Changing the source or filtering: if you have files backed up to the cloud storage and you exclude them later on by editing selected sources or the filtering criteria, those files will be marked as deleted in the cloud storage, even if their local copy is still available.
- Changing policies or cloud storage options: most options cannot be applied to files already backed up in the cloud storage. For example, if you change the compression level from low to high, the files backed up with the low compression level will remain unchanged. They will be replaced with more highly compressed files only when their local copies are uploaded to the cloud storage again. To force renewing the backed-up files in the cloud storage, run a full backup by disabling the “Only backup updated files” option.
Viewing backup job status
You can examine detailed status and logs of a backup job by clicking on the [View Logs] button in the [Action] column of the backup job list.
Status: you can see the job’s status and comprehensive statistics here. Please pay attention to the following when reading the statistics:
(1) Before the scanning is done, many statistics will be unavailable.
(2) The difference in the number of files between the scanned and the to-be-transferred is caused by filtering and incremental backup. For example, there could be 500 files in the selected folders, but only 5 files are modified and need to be backed up.
(3) The difference in the number of files between the to-be-transferred and the transferred is caused by failures to upload files to the cloud storage. Some files may have been modified during the data transfer and skipped, or the cloud storage may not have been accessible when the files were backed up.
(4) There could be a difference in the size of files even if the number of files is the same, because the files were modified after the scanning or compressed before being uploaded to the cloud storage.
(5) If the error count exceeds the maximum number of skipped files, the job will stop.
Events: you can see the job’s event history here, including any errors that occurred during the job execution.
Transfer: you can see the list of files being transferred at this moment.
History: you can see the list of historical job executions.
Deleting a backup job
You can delete a backup job by clicking on the [Delete] button in the [Action] column of the backup job list. However, the backed-up data in the cloud storage will be retained even after the backup job is deleted. To remove it, you can enable the “Clean up deleted data on the cloud” option in the backup job and change its source folder to an empty folder; when the backup job runs, it will delete the relevant data in the cloud. Alternatively, use the management console of your cloud storage or other utilities to delete the backup data.
The CloudBackup App allows you to create a restore job to restore data either from a backup job or from a specified location on the cloud storage. When restoring from a backup job, the restore job will use the metadata database associated with the backup job to retrieve the data from the cloud. The metadata database provides information about where to download the files, along with the names and attributes of the backed-up data. When restoring from the cloud, a restore job needs to retrieve the metadata database from the cloud storage first, and then follow the information in the metadata database to download the files.
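The role of the metadata database can be illustrated with a minimal sketch. The record layout below is purely hypothetical (the CloudBackup App’s actual on-disk format is not documented here); it only shows how a restore job can map cloud object keys back to local paths and honor the deleted flag.

```python
# Hypothetical metadata records: each maps a backed-up file to its
# object key in the cloud and keeps the attributes needed locally.
metadata = [
    {"local_path": "Photos/2014/trip.jpg",
     "object_key": "job1/data/0001",
     "mode": 0o644, "deleted": False},
    {"local_path": "Photos/old.jpg",
     "object_key": "job1/data/0002",
     "mode": 0o644, "deleted": True},   # marked deleted in the cloud
]

def plan_restore(records, skip_deleted=True):
    """Return (object_key, local_path) pairs a restore job would fetch."""
    return [(r["object_key"], r["local_path"])
            for r in records
            if not (skip_deleted and r["deleted"])]
```

With `skip_deleted=True` the plan mirrors the “Skip deleted files/folders” option described later; without it, files marked as deleted are also restored.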
Viewing restore jobs
You can create multiple restore jobs with different configurations and execution timing settings in the CloudBackup App. The restore jobs are presented in a list with the status and basic information. You can press the [Action] buttons to manually start/stop a job, edit job settings, view job status, and delete a job.
Creating a restore job
The CloudBackup App helps you to create a restore job using a wizard with the following steps:
Selecting the source
You can create a job to restore data based on one of the backup jobs in the same Turbo NAS or from the cloud storage.
(1) From backup job
If you choose to restore from a backup job, please select the backup job. Use this to restore deleted data and previous versions, or to restore archived data. The restore can complete properly only if the backup job has finished successfully. You can select which folders to restore in the next step, because the restore job can use the locally stored metadata to present the contents of your backup data. If you enable “Skip deleted files/folders”, the backup data marked as deleted will not be restored.
(2) From cloud
If you want to restore data backed up from another Turbo NAS, please select to restore from the cloud. Use this for disaster recovery or for migrating data from another Turbo NAS. However, if no metadata exists locally on the NAS, you will be unable to select which folders to restore, and all the backup data will be restored. You can select “metadata-only” when you create a restore job to restore just the metadata; after the job is done, you can edit the job and select which folders to restore. This two-step restore procedure helps you prioritize the data to be restored. If you enable “Skip deleted files/folders”, the backup data marked as deleted will not be restored.
Please also note the following regarding restoring from the cloud:
- Data access permissions are preserved by user ID and group ID, not by name, so a proper mapping between IDs and names is required for the access right controls to work properly.
- CloudBackup App downloads only the files that were backed up to the cloud storage by the CloudBackup App. A metadata database is uploaded to cloud storage for each backup job, and a restore job uses the database to recognize the backed-up files in the cloud storage, instead of downloading all files in the cloud storage. So if you use other utilities to store data in the destination of a backup job, you will be unable to retrieve the data by restore jobs.
- The CloudBackup App does not currently support restoring single files. However, as the backup data in the cloud storage is organized according to its local folder structure and path names, you can use other utilities to browse the backup data and download the files you need. Note that you will need to manually decompress and decrypt the data after the files are downloaded.
- You will need to download the metadata database file first and the backed-up files afterward. Since retrieving a file from Glacier takes 4-5 hours, expect to wait 8-10 hours before the first files are restored.
- As Amazon updates the vault inventory approximately once a day, you need to wait a minimum of one day after a backup job finishes before you can successfully restore your data from the cloud.
- Amazon offers free retrieval quotas per month. You can use transfer rate controls or restore only selected folders to control your budget.
- As Glacier is a special type of cloud storage, please ensure you are familiar with Glacier by visiting its FAQ page: http://aws.amazon.com/glacier/faqs/.
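If you download backed-up files with a third-party utility as described in the notes above, you must decompress them yourself. Below is a minimal sketch assuming the files use a gzip-compatible format; the App’s actual compression format may differ, and files with client-side AES-256 encryption enabled additionally require your encryption key and a suitable decryption tool before decompression.

```python
import gzip
import shutil

def decompress_file(src, dst):
    """Decompress a gzip-compressed backup file downloaded from the cloud."""
    # Stream the decompressed bytes to the destination to avoid
    # loading large backup files entirely into memory.
    with gzip.open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout)
```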
Selecting the folders
You can select which folders to restore if you have the metadata locally available.
Selecting the destination of the restored data
You can select to restore data back to its original locations or to another designated folder. Please note that restoring data back to its original location only works if the data is restored to the same NAS and all root shared folders still exist.
Configuring the restore schedule and options
You can configure the schedule and various restore options for the restore job here. Most options, such as error handling, are the same as the backup options. For restore, you also need to select how the job handles existing data: skip restoring the files, rename the existing files, or overwrite the existing files.
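The three existing-data policies can be sketched as follows. This is an illustrative interpretation, not the App’s actual code: under “rename”, the existing file is moved aside with a numeric suffix so the restored copy keeps its original name; the function name and suffix pattern are hypothetical.

```python
import os

def handle_existing(path, policy):
    """Prepare `path` for restore; return the path to write to, or None to skip."""
    if not os.path.exists(path):
        return path                           # no conflict, restore normally
    if policy == "skip":
        return None                           # keep the existing file
    if policy == "overwrite":
        return path                           # the restored copy replaces it
    if policy == "rename":
        base, ext = os.path.splitext(path)
        n = 1
        while os.path.exists(f"{base}({n}){ext}"):
            n += 1
        os.rename(path, f"{base}({n}){ext}")  # e.g. report.txt -> report(1).txt
        return path                           # restore under the original name
    raise ValueError(f"unknown policy: {policy}")
```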
Configuring the job name
The final step is to configure the job’s name.
Confirming the configurations
You can review all of your settings here. To change any settings, use the [Back] button to return to previous pages.
Your job is created as soon as you reach this page.
Editing a restore job
You can change the settings of a restore job by clicking the [Edit] button in the [Action] column of the restore job list. However, the changes will be applied only after the job is executed again.
You can choose to restore data only from a backup job that has finished restoring its metadata database, and you can select which folders in your backup data to restore. If your backup data in the cloud storage has changed, you need to restore the metadata again to get the updated information.
Viewing restore job status
You can examine detailed status and logs of a restore job by clicking on the [View Logs] button in the [Action] column of the restore job list. Similar to backup jobs, you can see a restore job’s status, events, transferred files, and history.
Deleting a restore job
You can delete a restore job by clicking on the [Delete] button in the [Action] column of the restore job list. Deleting a restore job does not change your backup/restored data.
Transfer rate control
The CloudBackup App allows you to configure data transfer limits for each account. The maximum upload and download rates can be configured separately. You can also configure the time interval and the days of the week on which the limits apply. For example, you can enable the limits only during office hours on working days to avoid saturating your Internet bandwidth and to reduce the performance impact on regular Internet usage. This also helps to reduce the cost of your cloud services if you are offered a free quota of data access within a period of time.
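The schedule-window logic described above can be sketched as follows. This is a hypothetical example of applying an upload cap only during office hours on weekdays; the configuration keys and values are illustrative, not the App’s actual settings format.

```python
from datetime import datetime

# Hypothetical per-account limit: cap uploads at 512 KB/s during
# office hours (09:00-18:00) on Monday through Friday.
LIMIT = {
    "days": {0, 1, 2, 3, 4},   # Monday=0 ... Friday=4
    "start_hour": 9,
    "end_hour": 18,
    "upload_kbps": 512,
}

def current_upload_limit(now, limit=LIMIT):
    """Return the upload cap in KB/s, or None when the transfer is unlimited."""
    in_window = (now.weekday() in limit["days"]
                 and limit["start_hour"] <= now.hour < limit["end_hour"])
    return limit["upload_kbps"] if in_window else None
```

Outside the configured window the function returns None, meaning the job may use the full available bandwidth.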
The S3 Plus App supports HTTP proxies. Please press the button with the wheel icon in the top-right corner of the App window to open the settings window. You can use either the proxy settings of your system or independent proxy settings for your S3 Plus App. However, please ensure your proxy server meets the following requirements:
- Is not a SOCKS proxy server
- Supports HTTPS
- Supports file uploading
- Supports long-lived connections
- Is fully compliant with HTTP 1.1
The CloudBackup App internally generates debug logs to facilitate troubleshooting. Please click the [i] button in the top-right corner of the App’s window. A small window will appear with the version information as well as a [Generate Debug Report] button. Press the button to download the debug log file, and send it to the QNAP Helpdesk along with your problem details.
Cloud storage utilities
In addition to using the CloudBackup App to access your data in the cloud storage, there are also many applications that can help you manage your data. Some are listed below for your reference:
- Cloudberry Lab (http://www.cloudberrylab.com/): S3, Glacier, Azure, Google Cloud, and OpenStack browser for Windows
- Cyberduck (https://cyberduck.io/): S3, Azure, Google Cloud, OpenStack, and WebDAV browser for Windows and Mac
- Duplicati (http://www.duplicati.com/): S3 and OpenStack backup client for Windows and Linux
- NetDrive (http://www.netdrive.net/): mounts a WebDAV server as a disk drive on Windows
- S3 Browser (http://s3browser.com/): S3 client for Windows
- S3cmd (http://s3tools.org/): command-line client for S3 on Linux
- Cloud Explorer (https://github.com/rusher81572/cloudExplorer): an S3 browser for Windows, Mac, and Linux
- s3fs (https://code.google.com/p/s3fs/): a FUSE-based file system backed by S3; mounts a bucket as a local read/write file system