Archive

Articles tagged ‘backup’

Rsync : Sync Files/Directories

21/07/2017 No comments

Copy files or directories from one host to another with rsync.

If you’d like to run rsync automatically from cron or a similar scheduler, configure it as follows, because without this setup interactive authentication would be required. In this example, files and directories under [/root/work] on dlp.srv.world are copied to [/home/backup] on www.srv.world.

[1] Configure on source host.

root@dlp:~# apt-get -y install rsync
root@dlp:~# vi /etc/rsync_exclude.lst
# specify files or directories you'd like to exclude from copying
test
test.txt
[2] Configure on destination host.

root@www:~# apt-get -y install rsync
root@www:~# vi /etc/default/rsync
# line 8: change
RSYNC_ENABLE=true
root@www:~# vi /etc/rsyncd.conf
# create new
# any name you like
[backup]
# destination directory to copy
path = /home/backup
# hosts you allow to access
hosts allow = 10.0.0.30
hosts deny = *
list = true
uid = root
gid = root
read only = false
root@www:~# mkdir /home/backup
root@www:~# systemctl start rsync
[3] That’s it. Execute rsync on the source host as follows.

root@dlp:~# rsync -avz --delete --exclude-from=/etc/rsync_exclude.lst /root/work/ www.srv.world::backup
# Add a cron entry if you'd like to run it regularly
root@dlp:~# crontab -e
# for example, run daily at 2:00 AM
00 02 * * * rsync -avz --delete --exclude-from=/etc/rsync_exclude.lst /root/work/ www.srv.world::backup
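
Before relying on the cron entry, it can help to confirm that the rsync daemon on the destination is reachable and that the module is exported; listing the modules from the source host is a quick check (with the configuration above, the [backup] module should appear in the output):

root@dlp:~# rsync www.srv.world::
backup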
Categories: Système

Better than Time Machine: backup your Mac with rsync

08/07/2016 Comments off

Backup your Mac with rsync

Using rsync: We all know we need to make backups. Apparently, 30% of all computer users lose all of their files at some point in their lives. Not a pretty prospect.

Fortunately, Mac Leopard users have a program called Time Machine that makes things a lot easier. But is Time Machine the perfect backup solution? I don’t think so. There are a couple of things that make Time Machine very unsuitable for me:

  • You need to get a separate external hard drive that can only be used for Time Machine (and has to be formatted first)
  • That drive has to be formatted in HFS+, hence, without any (commercial) third-party plugins it’s not readable on Windows or Linux systems
  • You have to leave your drive on all the time to make sure Time Machine makes backups
  • You can’t make a list of things you want to have backed up, you can only exclude folders from your complete hard disk
  • Time Machine makes an exact copy of your hard drive

Especially that last ‘feature’ is very irritating to me. I have an external drive with about 300GB of files, including lots of music and video files. My MacBook drive is only 80GB, so I can never have the complete contents of my external drive on my MacBook. Let’s say I have 10GB of MP3 files, which I back up with Time Machine, and then I remove about 5GB of files from my MacBook to free some space. What happens during the next backup round? Exactly: the 5GB of files get deleted from the external disk as well. When I want to play a certain MP3 file from my external drive, I now have to ‘restore’ it and ‘look back in history’ to find it. Not very user-friendly.

Luckily, there is a very good (free) alternative to Time Machine that does exactly what I want with backups: it lets you specify which folders you want to back up, it doesn’t delete things on the backup drive when you delete files from your original drive, and it’s compatible with any external drive and can even back up files over a network. This piece of software is called rsync. Here’s how to use it.

rsync is a command-line utility shipped with every copy of Mac OS X. It originated from the UNIX/Linux world, where it has been part of most Linux distributions for many years. rsync is reliable, fast, and easily configurable. Try running it by opening up the Terminal.app (located in your Applications/Utilities folder) and running the command:

rsync

You’ll get an overview of all possible options. In essence the syntax is very simple:

rsync OPTIONS SOURCE DESTINATION

What you’ll probably want is a one-way transfer of all files in SOURCE to DESTINATION, where only files that are missing on the DESTINATION disk or that have changed are copied. Aside from that, you’ll want to include all subdirectories, links, permissions, dates/times, groups, owners, and devices. To do that, simply use this easy-to-remember option list:

rsync -rlptgoD

Ha, just kidding! Fortunately there is a single switch that does all of that, namely the archive switch:

rsync -a

So, let’s say you want to back up the files in your Documents directory to your external hard drive, which you appropriately named ‘backup’; this would be the command:

rsync -a ~/Documents/ /Volumes/backup/Documents

For those of you who don’t use the Terminal very often: the tilde (~) is a shortcut for your home directory. If, for example, your name is ‘Alice’, your home directory would probably be

/Users/alice
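
If you are unsure what a given command will copy, a dry run is a safe way to preview it; this is a minimal sketch using the same example paths, adding -n (dry run) and -v (verbose) to the archive switch:

rsync -anv ~/Documents/ /Volumes/backup/Documents

Nothing is written during a dry run; rsync only lists the files it would transfer.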

Read more…

Do-It-Yourself Backup System Using Rsync

08/07/2016 Comments off

What is rsync?

Rsync is a program for synchronizing two directory trees across different file systems, even if they are on different computers. It can run its host-to-host communication over ssh to keep things secure and to provide key-based authentication. If a file is already present on the target and is identical to the one on the source, it will not be transmitted. If the file on the target differs from the one on the source, only the parts that differ are transferred. These features greatly increase the performance of rsync over a network.
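
As a concrete illustration of the ssh transport mentioned above, a typical invocation looks like this (a minimal sketch; the host name and paths are placeholders):

rsync -a -e ssh /home/ backupserver:/backups/host1/

The -e ssh option tells rsync to tunnel its communication through ssh, so the transfer is encrypted and can use key-based authentication.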

What are hard links?

Hard links are similar to symlinks. They are normally created using the ln command but without the -s switch. A hard link is when two file entries point to the same inode and disk blocks. Unlike symlinks there isn’t a file and a pointer to the file but rather two links to the same file. If you delete either entry the other will remain and will still contain the data. Here is an example of both:

  ------------- Symbolic Link Demo -------
  % echo foo > x
  % ln -s x y
  % ls -li ?
  38062 -rw-r--r--  1 kmk users 4 Jul 25 14:28 x
  38066 lrwxrwxrwx  1 kmk users 1 Jul 25 14:28 y -> x
  -- As you can see, y is only a pointer to x.
  % grep . ?
  x:foo
  y:foo
  -- They contain the same data.
  % rm x
  % ls -li ?
  38066 lrwxrwxrwx  1 kmk users 1 Jul 25 14:28 y -> x
  % grep . ?
  grep: y: No such file or directory
  -- Now that x is gone y is simply broken.
  ------------ Hard Link Demo ------------
  % echo foo > x
  % ln x y
  % ls -li ?
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:28 x
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:28 y
  -- They are the same file occupying the same disk space.
  % grep . ?
  x:foo
  y:foo
  -- They contain the same data.
  % rm x
  % ls -li ?
  38062 -rw-r--r--  1 kmk users 4 Jul 25 14:28 y
  % grep . ?
  y:foo
  -- Now y is simply an ordinary file.
  ---------- Breaking a Hard Link ----------
  % echo foo > x
  % ln x y
  % ls -li ?
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:34 x
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:34 y
  % grep . ?
  x:foo
  y:foo
  % rm y ; echo bar > y
  % ls -li ?
  38062 -rw-r--r--  1 kmk users 4 Jul 25 14:34 x
  38066 -rw-r--r--  1 kmk users 4 Jul 25 14:34 y
  % grep . ?
  x:foo
  y:bar

Why backup with rsync instead of something else?

  • Disk based: Rsync is a disk-based backup system. It doesn’t use tapes, which are too slow to back up (and, more importantly, restore) modern systems with large hard drives. Also, disk-based backup solutions are much cheaper than equivalently sized tape backup systems.
  • Fast: Rsync only backs up what has changed since the last backup. It NEVER has to repeat the full backup unlike most other systems that have monthly/weekly/daily differential configurations.
  • Less work for the backup client: Most of the work in rsync backups including the rotation process is done on the backup server which is usually dedicated to doing backups. This means that the client system being backed up is not hit with as much load as with some other backup programs. The load can also be tailored to your particular needs through several rsync options and backup system design decisions.
  • Fastest restores possible: If you just need to restore a single file or set of files, it is as simple as a cp or scp command (see the sketch after this list). Restoring an entire file system is just the reverse of the backup procedure. Restoring an entire system takes a bit longer, but is less work than backup systems that require you to reinstall your OS first, and about the same as other manual backup systems like dump or tar.
  • Only one restore needed: Even though each backup is an incremental they are all accessible as full backups. This means you only restore the backup you want instead of restoring a full and an incremental or a monthly followed by a weekly followed by a daily.
  • Cross Platform: Rsync can backup and recover anything that can run rsync. I have used it to backup Linux, Windows, DOS, OpenBSD, Solaris, and even ancient SunOS 4 systems. The only limitation is that the file system that the backups are stored on must support all of the file metadata that the file systems containing files to be backed up supports. In other words if you were to use a vfat file system for your backups you would not be able to preserve file ownership when backing up an ext3 file system. If this is a problem for you try looking into rdiff-backup.
  • Cheap: It doesn’t seem like it would be cheap to have enough disk space for 2 copies of everything and then some but it is. With tape drives you have to choose between a cheap drive with expensive tapes or an expensive drive with cheap tapes. In a hard drive based system you just buy cheap hard drives and use RAID to tie them together. My current backup server uses two 500GB IDE drives in a software RAID-0 configuration for a total of 1TB for about $100 which is about 1/6th what I paid for the DDS3 tape drive that I used to use and that doesn’t even include the tapes that cost about $10/12GB.
  • Internet: Since rsync can run over ssh and only transfers what has changed it is perfect for backing up things across the internet. This is perfect for backing up and updating a web site at a web hosting company or even a co-located server. Internet based backup systems are also becoming more and more popular. Rsync is the perfect tool to backup to such services over the internet.
  • Do-it-yourself: There are FOSS backup packages out now that use rsync as their back end but the nice thing here is that you are using standard command line tools (rsync, ssh, rm) so you can engineer your own backup system that will do EXACTLY what you want and you don’t need a special tool to restore.
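
To make the single-file restore case mentioned above concrete, here is a minimal sketch; the backup server name, backup directory layout, and file paths are placeholders, not something prescribed by rsync:

  # copy one file back from the backup server over ssh
  scp backupserver:/backups/host1/2016-07-08/home/alice/report.txt /home/alice/report.txt
  # or, if the backup disk is mounted locally on the same machine
  cp /backups/host1/2016-07-08/home/alice/report.txt /home/alice/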

Read more…

How to enable incremental file sync for many users on Linux

06/07/2016 Comments off

Suppose you, as a software developer, have set up daily builds of your software for testing purposes. Every day you make a new build, and users have to re-download the updated build to evaluate it. In this case you may want to enable differential downloads, so that users download only the difference between two builds, saving the server’s bandwidth. Users will also be happy, as they don’t have to wait to re-download the whole thing. Similar situations arise when you want to set up a download archive that allows incremental sync for users.

In these cases, how would you distribute incrementally updated files efficiently to multiple users? In fact, there are open-source storage solutions that come with "delta sync" capability built in, such as ownCloud or Syncthing. These kinds of full-blown solutions with a built-in GUI require users to install a dedicated client, and thus may be overkill for the simple file distribution you are looking for.

Barring full-blown third-party software like these, rsync may come to mind, since it can do bandwidth-efficient file sync. The problem with rsync, however, is that all the heavy-duty computation is done on the server side. That is, when a client requests a file sync, the server needs to perform block-by-block checksum computation and search for blocks not available on the client. Obviously this procedure can place a significant strain on the server’s resources if the server has to handle many users, and thus it does not scale.

What is Zsync?

This is when a command-line tool called zsync comes in handy. While zsync uses the same delta-encoding based sync algorithm as rsync does, it moves the heavy duty computation away from the server and onto the clients. What do I mean by that?

Well, in zsync, the server maintains a separate .zsync metadata file for each file to distribute, which contains a list of "precomputed" checksums for individual blocks of the file. When a zsync client requests a file sync, it downloads the .zsync metadata file first and performs the block-by-block checksum calculation to find missing blocks on its own. The client then requests the missing blocks using HTTP range requests. As you can see, the server is completely out of the sync algorithm's loop and simply serves requested file blocks over HTTP, which makes it ideal when incremental file sync is needed for many users.

Here is a quick rundown of the server-side overhead difference between rsync and zsync. In the plot below, I compare rsync and zsync in terms of the server’s CPU usage while 200 users download a tarball that differs by 2.5% from a previous version. For a fair comparison, SSH is not used for rsync.

With zsync, since all checksum computation overhead has shifted from the server to individual clients, the server overhead is reduced dramatically. A small neat idea makes zsync a real winner!

In the rest of the tutorial, I will show how to distribute a file incrementally using zsync under the Linux environment.

Zsync: Client Side Setup

On the client side, you need to install zsync to initiate file transfers from a remote web server. zsync is extremely lightweight, and is included in most Linux distros. Here is how to install zsync on various distros.

Debian, Ubuntu or Linux Mint:

$ sudo apt-get install zsync

Fedora:

$ sudo yum install zsync

CentOS or RHEL:

First, enable Repoforge repository, and then run:

$ sudo yum install zsync

Arch Linux:

$ sudo pacman -S zsync

Zsync: Server Side Setup

On the server side, pretty much all you need is an HTTP/1.1-capable web server. Any commodity web server will do. For each file you want to distribute with zsync, you then need to prepare a .zsync metadata file.

You can create a .zsync file using the zsyncmake command-line tool, which comes with the zsync package. Below are the steps to create a .zsync metadata file. Follow these steps on any Linux computer where zsync is installed.

Assume that you want to distribute a file custom.tar.gz which is already compressed with gzip.

First, uncompress custom.tar.gz. If the file is not compressed, you can skip this step.

$ gunzip custom.tar.gz

Next, compress the file with zsyncmake using the "-z" option.

$ zsyncmake -z custom.tar

It will compress the input file in gzip format (custom.tar.gz), and create a corresponding .zsync metadata file (custom.tar.zsync).

The reason we compress it with zsyncmake rather than with gzip is that zsyncmake's compression produces a gzip file that remains incrementally updatable.

Finally, upload custom.tar.gz and custom.tar.zsync to a host where a web server is running. Make sure to upload both files to the same directory.

Incrementally Download a File with Zsync

Once both the server and the client are ready, you can go ahead and run zsync on the client as follows.

$ zsync http://192.168.1.239/custom.tar.zsync

Note that you should point to the URL of the .zsync metadata file, not the tarball to download.

The first download will fetch the whole tarball from the server.

Over time, if the remote tarball gets updated, you can use the same command to re-download the updated file. Zsync will automatically detect the change and perform a delta download.

Obviously you need to re-run zsync in the same directory where the previously downloaded tarball resides. The old tarball will be renamed (with .zs-old extension), and the newer tarball will be constructed in the current directory.

If the previously downloaded tarball is somewhere else, you can use the "-i" option to specify its location explicitly.

$ zsync -i /backup/custom.tar http://192.168.1.239/custom.tar.zsync
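
If clients should pick up new builds automatically, the same command can be scheduled from cron; a minimal sketch, assuming the URL from this example and that the tarball lives in /backup:

$ crontab -e
# check for an updated build every night at 3:00 AM
00 03 * * * cd /backup && zsync http://192.168.1.239/custom.tar.zsync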

Conclusion

In this tutorial I showed how to use zsync to download a remote file incrementally while putting little overhead on the server. There are places where zsync is already used (e.g., Ubuntu daily ISO images or Debian live ISO images). Now you know what to do when you see .zsync files.

 

Source: Xmodulo

Categories: Logiciel

How to Backup Linux? 15 rsync Command Examples

05/04/2016 Comments off

rsync stands for remote sync.

rsync is used to perform the backup operation in UNIX / Linux.

The rsync utility is used to synchronize files and directories from one location to another in an efficient way. The backup location can be on a local server or on a remote server.

Important features of rsync

  • Speed: The first time, rsync replicates the whole content between the source and destination directories. From then on, rsync transfers only the changed blocks or bytes to the destination, which makes the transfer really fast.
  • Security: rsync allows encryption of data using the ssh protocol during transfer.
  • Less Bandwidth: rsync compresses and decompresses data block by block at the sending and receiving ends respectively, so the bandwidth used by rsync will always be less than that of other file transfer protocols.
  • Privileges: No special privileges are required to install and execute rsync.

Syntax

$ rsync options source destination

The source and destination can be either local or remote. For a remote location, specify the login name, remote server name, and path.
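
For example, a remote destination is written as user@host:path (the host name below is only a placeholder):

$ rsync -avz /var/opt/installation/inventory/ root@remote.example.com:/root/temp/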

Example 1. Synchronize Two Directories on a Local Server

To sync two directories on a local computer, use the following rsync -zvr command.

$ rsync -zvr /var/opt/installation/inventory/ /root/temp
building file list ... done
sva.xml
svB.xml
.
sent 26385 bytes  received 1098 bytes  54966.00 bytes/sec
total size is 44867  speedup is 1.63
$

In the above rsync example:

  • -z is to enable compression
  • -v verbose
  • -r indicates recursive

Now let us see the timestamp on one of the files that was copied from source to destination. As you see below, rsync didn’t preserve timestamps during sync.

$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 bin  bin  949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root bin  949 Sep  2  2009 /root/temp/sva.xml

Example 2. Preserve timestamps during Sync using rsync -a

The rsync option -a indicates archive mode. The -a option does the following:

  • Recursive mode
  • Preserves symbolic links
  • Preserves permissions
  • Preserves timestamp
  • Preserves owner and group

Now, execute the same command from example 1, but with the rsync option -a, as shown below:

$ rsync -azv /var/opt/installation/inventory/ /root/temp/
building file list ... done
./
sva.xml
svB.xml
.
sent 26499 bytes  received 1104 bytes  55206.00 bytes/sec
total size is 44867  speedup is 1.63
$

As you see below, rsync preserved timestamps during sync.

$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 root  bin  949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root  bin  949 Jun 18  2009 /root/temp/sva.xml

Read more…

Categories: Réseau, Système

Simple versioned TimeMachine-like backup using rsync

07/03/2016 Comments off

Using TimeMachine with rsync: Over many years, I have dealt with scripts that do backup versioning, i.e., maintain multiple backups. Due to their flexibility, they have been complex to understand and configure. Here is a simple rsync-based tool with a different focus: the experienced systems administrator who wants to keep the system’s complexity down.

Backup in action: TimeMachine and rsync

It consists of a simple script, which you can call rsync-backup.sh and store wherever you like, e.g., in /usr/local/sbin. I will use these names and paths in the examples.

#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
if [ "$#" -ne 3 ]; then
    echo "$0: Expected 3 arguments, received $#: $@" >&2
    exit 1
fi
# If a previous backup exists, hard-link unchanged files against it
if [ -d "$2/__prev/" ]; then
    rsync -a --delete --link-dest="$2/__prev/" "$1" "$2/$3"
else
    rsync -a                                   "$1" "$2/$3"
fi
# Remember this backup as the base for the next incremental run
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"

During normal operation, it boils down to three simple statements:

  1. rsync with --link-dest: Copying the contents of <src> to <dst>/<label>, reusing the files from the previous backup with hard links (the rsync invocation without --link-dest does not use --delete, to reduce the risk of accidentally deleting files when called with wrong parameters)
  2. rm and ln: Remember this backup location for the next incremental backup.

Voilà – it doesn’t get much easier than that!

Of course, there is something missing: the actual backup policy. It is delegated to cron, which I consider an advantage. Using this separation of duties, many policies can be implemented very easily and composed in a modular way:

Create daily backups for every weekday

You might know this from automysqlbackup or autopostgresqlbackup: A backup is created every day and overwritten after 7 days. This is achieved by adding the following file to /etc/cron.daily/:

#!/bin/sh
/usr/local/sbin/rsync-backup.sh /home /data/backup `date +%A`

All your users’ files are copied daily to /data/backup, into a directory named after the current weekday and overwritten weekly.

Daily backups for a month

Sure, this is easy as well: put this, with a descriptive name, into /etc/cron.daily/:

#!/bin/sh
/usr/local/sbin/rsync-backup.sh /home /data/backup `date +Day-%d`
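
The same pattern extends to longer retention. For example, a sketch of a script placed in /etc/cron.monthly/ that keeps one backup per month, overwritten after a year:

#!/bin/sh
/usr/local/sbin/rsync-backup.sh /home /data/backup `date +%B`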

Read more…

Categories: Système

MySQL Backup

03/02/2016 Comments off

MySQL Backup

To back up a database (without and with compression):

# mysqldump NOM_BASE > NOM_FICHIER
# mysqldump NOM_BASE | gzip > NOM_FICHIER

To restore a database (without and with compression):

# mysqladmin create NOM_BASE
# mysql NOM_BASE < NOM_FICHIER
# gunzip < NOM_FICHIER | mysql NOM_BASE

To back up all databases:

# mysqldump --opt --all-databases > NOM_FICHIER
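
To run such a full dump automatically, a cron entry along these lines can be used (a minimal sketch; the destination path and schedule are assumptions, and note that % must be escaped as \% inside a crontab):

# crontab -e
30 01 * * * mysqldump --opt --all-databases | gzip > /var/backups/mysql/all-$(date +\%F).sql.gz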

To back up only certain tables:

# mysqldump NOM_BASE NOM_TABLE0 [NOM_TABLE1...] > NOM_FICHIER

To get something close to an "--exclude" option (which mysqldump sorely lacks):

mysql -B -N -e 'show databases' | 
  perl -ne 'print unless /\b(?:phpmyadmin|mysql|information_schema)\b/' | 
  xargs echo mysqldump -B

And to back up tables matching a pattern (most often a prefix):

# mysqldump NOM_BASE $(mysql NOM_BASE -B --column-names=False -e "show tables like 'exemple_%'") > NOM_FICHIER

To dump with a particular condition:

mysqldump -t <base> <table> --where="my_id='66666666'"

This makes it possible to re-inject the data resulting from a SELECT * FROM base.table WHERE my_id='66666666'.

It is of course possible to perform all of these operations on a specific instance by specifying its port with the --port option (valid for both mysqldump and mysql).
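
For example, to dump from an instance listening on a non-default port (the port number here is only an illustration):

# mysqldump --port=3307 NOM_BASE > NOM_FICHIER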

To obtain a list of MySQL users, you can use this function (picked up on serverfault):

mygrants()
{
  mysql -B -N -e "SELECT DISTINCT CONCAT(
    'SHOW GRANTS FOR ''', user, '''@''', host, ''';'
    ) AS query FROM mysql.user" | 
  mysql | 
  sed 's/\(GRANT .*\)/\1;/;s/^\(Grants for .*\)/## \1 ##/;/##/{x;p;x;}'
}

Read more…

HowTo: Make Ubuntu a Perfect Mac File Server and Time Machine Volume

13/01/2016 Comments off

For quite some time I have used my Ubuntu machine as a file and backup server for all Macs in my network; it is perfectly accessible from the Finder in Mac OS X. There are some instructions available on the web for this task, but all of them failed in my case, so I wrote my own tutorial with all the steps needed for it to work properly.

So here’s my little tutorial for connecting Mac OS X Leopard with Ubuntu and using your Ubuntu machine as a backup volume for Time Machine; all steps can be reproduced on any Linux box, and they work with Mac OS X 10.4 Tiger too. At the end of this tutorial you will have a server that shows up in the Finder sidebar and behaves just like a Mac server when you access it from your Macs. To be perfectly integrated with Mac OS X, we’re going to use the Apple Filing Protocol (AFP) for networking and file sharing.

Although this tutorial involves using the Terminal in Ubuntu and looks a bit geeky, it’s very easy even for beginners. I have tried to explain all steps and Terminal commands, so you may learn a bit about the Terminal too. At the end of the article you can download my Server Displays icon pack, quickly made by me, with custom icons for Mac, Ubuntu, and Windows servers.

Personally I use a fresh installation of Ubuntu 8.04 Hardy Heron Desktop version (32bit on one machine, 64bit on the other) and Mac OS X Leopard (10.5.3 and later) to connect to them. On my Ubuntu boxes there’s no other file sharing protocol like samba (Windows sharing) or NFS activated.

Rumor has it that Apple will add some undocumented AFP commands with the Mac OS X 10.5.6 update, which therefore won’t be supported by the current Netatalk package (and maybe never will be). So be sure to check the latest comments on this article when the 10.5.6 update is out, to see whether this rumor is true and whether it causes problems.

Here are the steps involved in setting up your Ubuntu box as a Mac file server:

  1. Modify and install Netatalk (Open Source AFP implementation)
  2. Configure Netatalk
  3. Configure shared volumes (and Time Machine volume)
  4. Install Avahi (Open Source Bonjour implementation)
  5. Configure Avahi and advertise services
  6. Configure TimeMachine
  7. Conclusion, problems and more information
  8. Downloading and using the Server Display Icons
  9. Translations Of This Article

Read more…

Categories: Système

Guide to rsnapshot and incremental backups on Linux

30/12/2015 Comments off

1. Introduction

rsnapshot is a backup tool written in Perl that utilizes rsync as its back-end. rsnapshot allows users to create customized incremental backup solutions. This article will discuss the following: the benefits of an incremental backup solution, rsnapshot’s installation, its configuration, and usage examples.

2. Back-it up!

I was recently discussing with a colleague the benefits of backing up your data. My colleague was telling me how one of her customers had recently lost a rather lengthy article that they had been working on. I decided that this might be a good chance to experiment with my netbook and rsnapshot. For this tutorial, I'll assume you have 2 pieces of hardware: your host computer and your destination equipment. I'll be using an external hard drive for the majority of this post. However, I will briefly cover usage for backing up files over a LAN.

Whether to back up your data should not be the question; rather, ask: how should I back up my stuff? What's the best way? Well, there are many different backup pathways you can take, including block level (dd, partimage), partition level (RAID and all its variations), and file level (rsync and its derived applications). I'll discuss two types of backups in the context of file-based backups.

Normal backups, or full backups, are self-explanatory. Normal backups are one way of backing up ALL your files every time you perform a backup. One issue with a scheme of multiple normal backups is that a normal backup takes up a considerable amount of space. For example, performing a full backup of a 250GB hard drive at 20% capacity every day for just one week (assuming that the amount of data does not fluctuate) means you have already used 350GB for only one week's worth of backups. As you can see, that is not feasible in the long run. The other method, which I prefer, is the incremental backup method. An incremental backup consists of one full backup followed by additional backups. These additional backups only back up files that have changed since the last backup. Instead of backing up your entire hard drive, only the specific files that have changed since the last backup are backed up. As you can probably imagine, this is a much more efficient process. One tool that does this on *nix is rsnapshot.

3. What is rsnapshot?

rsnapshot, as mentioned earlier, is an incremental backup utility. In this tutorial, I will show you how to create a seven-day rotating backup scheme using rsnapshot. Essentially, rsnapshot will create one full backup, and subsequent backups will back up only files that have changed. The true power of rsnapshot is its ability to use hard links between backups. Each backup appears to be a full backup; in reality each new backup consists only of newly created or updated files. rsnapshot can be used over a LAN and can also be run from cron. In this tutorial, I'll show both usage examples.

4. Installation

Installation of rsnapshot is pretty straightforward, simply run the following command:

On Debian (or Ubuntu):

apt-get install rsnapshot

On Fedora:

yum install rsnapshot

On ArchLinux:

pacman -S rsnapshot

Now let’s configure rsnapshot.
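
As a preview of that configuration step, the essential settings in /etc/rsnapshot.conf for a seven-day rotation look roughly like this (a minimal sketch; rsnapshot requires tabs, not spaces, between fields, and older versions use the keyword interval instead of retain):

snapshot_root   /backup/snapshots/
retain          daily   7
backup          /home/  localhost/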

Read more…

Rsnapshot

30/12/2015 Comments off

Introduction

You know it by now: backups are essential… Necessary, but easy to forget, unless they are performed automatically.

Here is a tutorial describing how to set up a simple automatic backup solution based on rsnapshot.

Rsnapshot is a script written in Perl.

It uses Rsync (and ssh, if you wish) to perform backups at regular intervals.

It can back up a file system, or databases via helper scripts.

One of rsnapshot's main advantages is its extreme simplicity.

rsnapshot uses Unix hard links to:

  • Avoid duplicating files unnecessarily.
  • Make restoration easier.

rsnapshot creates the illusion of several full backups, while the file system actually only holds the first one plus whatever differences have appeared between it and the following ones. This is a differential backup method.

In this tutorial we will explain how to set up a secure differential backup of a directory on a remote machine.

Prerequisite: Configuring SSH and keys

You must be able to connect to the machines you are going to back up without a password:

First of all, configure ssh and import the remote server's key.

I will proceed as in this tutorial.
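
In short, this comes down to generating a key pair on the backup host and copying the public key to the remote machine; a minimal sketch, assuming the host name and SSH port used in the test below (the -p option of ssh-copy-id is available in recent OpenSSH releases):

root@nas:~# ssh-keygen -t rsa
root@nas:~# ssh-copy-id -p 10122 root@vanille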

Test it:

root@nas:~# ssh -p 10122 vanille
Linux vanille.zehome.org 2.6.32-5-amd64 #1 SMP Tue Jun 14 09:42:28 UTC 2011 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Aug 11 08:07:48 2011 from nas.zehome.org
root@vanille:~#

Installing rsnapshot

root@nas:~# apt-get install rsnapshot

This will also install Rsync at the same time.

Back up the configuration file:

root@nas:~# cp /etc/rsnapshot.conf /etc/rsnapshot.conf.sos
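
The next step (behind the link below) is editing /etc/rsnapshot.conf. For backing up a directory from the remote machine over ssh on port 10122, the relevant lines would look roughly like this (a minimal sketch based on the hosts used above; fields must be separated by tabs):

ssh_args        -p 10122
retain          daily   7
backup          root@vanille:/home/     vanille/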

Read more…