
Scripting with LucidLink: a backup-to-the-cloud concept

Use S3 buckets as both the production file space and the backup target to take advantage of LucidLink's compression and S3 price benefits, and to incur no egress fees when everything runs in the same EC2/S3 region.


Create an EC2 Linux instance within the same region as both buckets (placing the buckets in different regions gives greater availability but incurs egress fees). Always keep at least one instance residing in the region of one of the buckets; there is no point paying to cross the Internet with all three resources in separate regions.
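If you prefer to script this step, a minimal AWS CLI sketch follows; the region, AMI ID, instance type, and key pair name are placeholders to substitute with your own values:

aws ec2 run-instances \
    --region us-east-1 \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.medium \
    --key-name my-key \
    --count 1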


Within the portal, create equivalent production (fs1_prod) and backup (fs1_bkup) file spaces. These file spaces could again be located with the same vendor, or with different vendors for true disaster recovery, e.g. AWS to a third-party cloud provider (egress fees apply), on-premises to AWS (free), or Wasabi to AWS (free).


Link both file spaces and mount each one to its own folder:


lucid --instance 1 daemon
lucid --instance 2 daemon

lucid --instance 1 link --fs fs1_prod --password <secret> --mount-point ~/fs1_prod
lucid --instance 2 link --fs fs1_bkup --password <secret> --mount-point ~/fs1_bkup
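Before relying on the mounts, it is worth verifying that both instances are linked; assuming the client CLI's status command, something like the following gives a quick sanity check:

lucid --instance 1 status
lucid --instance 2 status
mount | grep fs1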


Leverage rsync to periodically sync data which has changed from the production file space to the backup file space:


rsync -avP --delete ~/fs1_prod/ ~/fs1_bkup/

(Note the trailing slashes rather than a /* glob: the glob would skip dotfiles and stop --delete from removing files at the top level of the backup.)
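To make this safe to run unattended, a small wrapper script helps. This is a minimal sketch; the script name, lock file path, and mount checks are illustrative:

#!/usr/bin/env bash
# lucid-backup.sh - sync the production file space to the backup file space.
set -euo pipefail

SRC="$HOME/fs1_prod/"
DST="$HOME/fs1_bkup/"
LOCK=/tmp/lucid-backup.lock

# Refuse to run if either file space is not mounted, so we never
# sync from (or delete into) an empty local folder.
mountpoint -q "$HOME/fs1_prod" || { echo "fs1_prod not mounted" >&2; exit 1; }
mountpoint -q "$HOME/fs1_bkup" || { echo "fs1_bkup not mounted" >&2; exit 1; }

# flock -n skips this run if a previous sync is still in progress.
exec flock -n "$LOCK" rsync -avP --delete "$SRC" "$DST"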


Schedule backups once or twice a day through crontab (or AWS Instance Scheduler). Introduce additional file space instance IDs to maintain more than one point in time, or direct backup data to multiple subfolders to keep several copies under the same backup file space mount point, as sketched below.
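For example, assuming the wrapper script above is saved as /usr/local/bin/lucid-backup.sh, a crontab entry (edit with crontab -e) for backups at 02:00 and 14:00 could look like:

0 2,14 * * * /usr/local/bin/lucid-backup.sh >> /var/log/lucid-backup.log 2>&1

To keep multiple dated copies in the same backup file space, point the script's destination at a per-day subfolder instead, e.g.:

DST="$HOME/fs1_bkup/$(date +%F)/"    # e.g. ~/fs1_bkup/2025-01-31/

Note that each dated folder is then a full copy; rsync's --link-dest option can hard-link unchanged files against the previous day's folder if space is a concern.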


