Archive

Articles tagged 'sauvegarde'

Rsync: Sync Files/Directories

21/07/2017 No comments

Copy files or directories from one host to another with rsync.

If you'd like to run rsync automatically from cron or a similar scheduler, configure it as follows, because otherwise authentication would be required on each run. In this example, files and directories under [/root/work] on dlp.srv.world are copied to [/home/backup] on www.srv.world.

[1] Configure on source host.

root@dlp:~# apt-get -y install rsync
root@dlp:~# vi /etc/rsync_exclude.lst
# specify files or directories you'd like to exclude from copying
test
test.txt
[2] Configure on destination host.

root@www:~# apt-get -y install rsync
root@www:~# vi /etc/default/rsync
# line 8: change
RSYNC_ENABLE=true
root@www:~# vi /etc/rsyncd.conf
# create new
# any name you like
[backup]
# destination directory to copy
path = /home/backup
# hosts you allow to access
hosts allow = 10.0.0.30
hosts deny = *
list = true
uid = root
gid = root
read only = false
root@www:~# mkdir /home/backup
root@www:~# systemctl start rsync
[3] That's it. Run rsync on the source host as follows.

root@dlp:~# rsync -avz --delete --exclude-from=/etc/rsync_exclude.lst /root/work/ www.srv.world::backup
# Add a cron job if you'd like to run it regularly
root@dlp:~# crontab -e
# for example, run daily at 2:00 AM
00 02 * * * rsync -avz --delete --exclude-from=/etc/rsync_exclude.lst /root/work/ www.srv.world::backup
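Before wiring the job into cron, it can be worth previewing what would be transferred; this is simply the same command with rsync's -n (--dry-run) flag added:

root@dlp:~# rsync -avzn --delete --exclude-from=/etc/rsync_exclude.lst /root/work/ www.srv.world::backup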
Categories: Système

Better than Time Machine: backup your Mac with rsync

08/07/2016 Comments off

Backup your Mac with rsync

Using rsync: We all know we need to make backups. Apparently, 30% of all computer users lose all of their files at some point in their lives. Not a pretty prospect.

Fortunately, Mac Leopard users have a program called Time Machine that makes things a lot easier. But is Time Machine the perfect backup solution? I don’t think so. There are a couple of things that make Time Machine very unsuitable for me:

  • You need to get a separate external hard drive that can only be used for Time Machine (and has to be formatted first)
  • That drive has to be formatted as HFS+, so without (commercial) third-party plugins it's not readable on Windows or Linux systems
  • You have to leave your drive on all the time to make sure Time Machine makes backups
  • You can't make a list of things you want to have backed up; you can only exclude folders from your complete hard disk
  • Time Machine makes an exact copy of your hard drive

Especially that last 'feature' is very irritating to me. I have an external drive with about 300GB of files, including lots of music and video files. My MacBook drive is only 80GB, so I can never have the complete contents of my external drive on my MacBook. Let's say I have 10GB of MP3 files, which I back up with Time Machine, and then I remove about 5GB of files from my MacBook to free some space. What happens during the next backup round? Exactly: the 5GB of files get deleted from the external disk as well. When I want to play a certain MP3 file from my external drive I now have to 'restore' and 'look back in history' to find it. Not very user-friendly.

Luckily, there is a very good (free) alternative to Time Machine that does exactly what I want from backups: it lets you specify which folders you want to back up, it doesn't delete things on the backup drive when you delete files from your original drive, and it's compatible with any external drive and can even back up files over a network. This piece of software is called rsync. Here's how to use it.

rsync is a command-line utility shipped with every copy of Mac OS X. It originated from the UNIX/Linux world, where it has been part of most Linux distributions for many years. rsync is reliable, fast, and easily configurable. Try running it by opening up the Terminal.app (located in your Applications/Utilities folder) and running the command:

rsync

You’ll get an overview of all possible options. In essence the syntax is very simple:

rsync OPTIONS SOURCE DESTINATION

What you'll probably want is a one-way transfer of all files in SOURCE to DESTINATION, where only files that are not yet on the DESTINATION disk, or that have changed, are copied. Aside from that you'll want to include all subdirectories, links, permissions, dates/times, groups, owners and devices. To do that, simply use this easy-to-remember option list:

rsync -rlptgoD

Ha, just kidding! Fortunately there is a single switch that does all of that, namely the archive switch:

rsync -a

So, let's say you want to back up the files in your Documents directory to your external hard drive, which you appropriately named 'backup'. This would be the command:

rsync -a ~/Documents/ /Volumes/backup/Documents

For those of you who don't use the Terminal very often: the tilde (~) is a shortcut for your home directory. If, for example, your name were 'Alice', your home directory would probably be

/Users/alice
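So the command from above, written out without the tilde and extended to a second folder, would look like this (the user name 'alice' and the volume name 'backup' are just examples):

rsync -a /Users/alice/Documents/ /Volumes/backup/Documents
rsync -a /Users/alice/Pictures/ /Volumes/backup/Pictures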

Read more…

Do-It-Yourself Backup System Using Rsync

08/07/2016 Comments off

What is rsync?

Rsync is a program for synchronizing two directory trees across different file systems, even if they are on different computers. It can run its host-to-host communication over ssh to keep things secure and to provide key-based authentication. If a file is already present on the target and is the same as on the source, the file will not be transmitted. If the file on the target differs from the one on the source, only the parts that differ are transferred. These features greatly increase the performance of rsync over a network.

What are hard links?

Hard links are similar to symlinks. They are normally created using the ln command but without the -s switch. A hard link is when two file entries point to the same inode and disk blocks. Unlike symlinks there isn’t a file and a pointer to the file but rather two links to the same file. If you delete either entry the other will remain and will still contain the data. Here is an example of both:

  ------------- Symbolic Link Demo -------
  % echo foo > x
  % ln -s x y
  % ls -li ?
  38062 -rw-r--r--  1 kmk users 4 Jul 25 14:28 x
  38066 lrwxrwxrwx  1 kmk users 1 Jul 25 14:28 y -> x
  -- As you can see, y is only a pointer to x.
  % grep . ?
  x:foo
  y:foo
  -- They contain the same data.
  % rm x
  % ls -li ?
  38066 lrwxrwxrwx  1 kmk users 1 Jul 25 14:28 y -> x
  % grep . ?
  grep: y: No such file or directory
  -- Now that x is gone y is simply broken.
  ------------ Hard Link Demo ------------
  % echo foo > x
  % ln x y
  % ls -li ?
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:28 x
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:28 y
  -- They are the same file occupying the same disk space.
  % grep . ?
  x:foo
  y:foo
  -- They contain the same data.
  % rm x
  % ls -li ?
  38062 -rw-r--r--  1 kmk users 4 Jul 25 14:28 y
  % grep . ?
  y:foo
  -- Now y is simply an ordinary file.
  ---------- Breaking a Hard Link ----------
  % echo foo > x
  % ln x y
  % ls -li ?
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:34 x
  38062 -rw-r--r--  2 kmk users 4 Jul 25 14:34 y
  % grep . ?
  x:foo
  y:foo
  % rm y ; echo bar > y
  % ls -li ?
  38062 -rw-r--r--  1 kmk users 4 Jul 25 14:34 x
  38066 -rw-r--r--  1 kmk users 4 Jul 25 14:34 y
  % grep . ?
  x:foo
  y:bar
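Hard links are also what make space-efficient snapshot backups possible: rsync's --link-dest option turns unchanged files into hard links back into a previous snapshot instead of copying them again. A minimal sketch, with hypothetical paths:

  % rsync -a --link-dest=/backups/2016-07-07/ /home/ /backups/2016-07-08/

Files that did not change between the two dates occupy disk space only once, yet both snapshot directories look like complete copies.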

Why backup with rsync instead of something else?

  • Disk based: Rsync is a disk-based backup system. It doesn't use tapes, which are too slow to back up (and more importantly restore) modern systems with large hard drives. Also, disk-based backup solutions are much cheaper than equivalently sized tape backup systems.
  • Fast: Rsync only backs up what has changed since the last backup. It NEVER has to repeat the full backup, unlike most other systems that have monthly/weekly/daily differential configurations.
  • Less work for the backup client: Most of the work in rsync backups including the rotation process is done on the backup server which is usually dedicated to doing backups. This means that the client system being backed up is not hit with as much load as with some other backup programs. The load can also be tailored to your particular needs through several rsync options and backup system design decisions.
  • Fastest restores possible: If you just need to restore a single file or set of files it is as simple as a cp or scp command. Restoring an entire file system is just a reverse of the backup procedure. Restoring an entire system is a bit long but is less work than backup systems that require you to reinstall your OS first and about the same as other manual backup systems like dump or tar.
  • Only one restore needed: Even though each backup is an incremental they are all accessible as full backups. This means you only restore the backup you want instead of restoring a full and an incremental or a monthly followed by a weekly followed by a daily.
  • Cross Platform: Rsync can back up and recover anything that can run rsync. I have used it to back up Linux, Windows, DOS, OpenBSD, Solaris, and even ancient SunOS 4 systems. The only limitation is that the file system the backups are stored on must support all of the file metadata that the file systems containing the files to be backed up support. In other words, if you were to use a vfat file system for your backups, you would not be able to preserve file ownership when backing up an ext3 file system. If this is a problem for you, try looking into rdiff-backup.
  • Cheap: It doesn't seem like it would be cheap to have enough disk space for 2 copies of everything and then some, but it is. With tape drives you have to choose between a cheap drive with expensive tapes or an expensive drive with cheap tapes. In a hard-drive-based system you just buy cheap hard drives and use RAID to tie them together. My current backup server uses two 500GB IDE drives in a software RAID-0 configuration for a total of 1TB for about $100, which is about 1/6th what I paid for the DDS3 tape drive I used to use, and that doesn't even include the tapes, which cost about $10/12GB.
  • Internet: Since rsync can run over ssh and only transfers what has changed, it is perfect for backing up things across the internet (see the sketch after this list). This is ideal for backing up and updating a web site at a web hosting company or even a co-located server. Internet-based backup services are also becoming more and more popular, and rsync is the perfect tool to back up to such services over the internet.
  • Do-it-yourself: There are FOSS backup packages out now that use rsync as their back end, but the nice thing here is that you are using standard command-line tools (rsync, ssh, rm), so you can engineer your own backup system that does EXACTLY what you want, and you don't need a special tool to restore.
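For the ssh-based case mentioned above, pulling a remote host's /etc onto the backup server could look roughly like this (the host name and paths are hypothetical):

  % rsync -a -e ssh root@client.example.com:/etc/ /backups/client/etc/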

Read more…

How to enable incremental file sync for many users on Linux

06/07/2016 Comments off

Suppose that, as a software developer, you have set up daily builds of your software for testing purposes. Every day you make a new build, and users have to re-download it to evaluate it. In this case you may want to enable differential downloads, so that users can download only the difference between two builds, thereby saving the server's bandwidth. Users will also be happy, as they don't have to wait to re-download the whole thing. A similar situation arises when you want to set up a download archive that allows incremental sync for users.

In these cases, how would you distribute incrementally updated files efficiently to multiple users? There are open-source storage solutions with "delta sync" capability built in, such as ownCloud or Syncthing. These full-blown solutions with a built-in GUI require users to install a dedicated client, and thus may be overkill for the simple file distribution you are looking for.

Barring full-blown third-party software like these, rsync may come to mind, since it can do bandwidth-efficient file sync. The problem with rsync, however, is that all the heavy-duty computation is done on the server side. That is, when a client requests a file sync, the server needs to perform block-by-block checksum computation and search for blocks not available on the client. Obviously this procedure can place a significant strain on the server's resources if it has to handle many users, and thus does not scale.

What is Zsync?

This is when a command-line tool called zsync comes in handy. While zsync uses the same delta-encoding based sync algorithm as rsync does, it moves the heavy duty computation away from the server and onto the clients. What do I mean by that?

Well, in zsync, the server maintains a separate .zsync metadata file for each file to distribute, which contains a list of "precomputed" checksums for the individual blocks of the file. When a zsync client requests a file sync, it downloads the .zsync metadata file first and performs the block-by-block checksum calculation to find missing blocks on its own. The client then requests the missing blocks using HTTP range requests. As you can see, the server is totally out of the loop of the sync algorithm, and simply serves requested file blocks over HTTP, which makes it ideal when incremental file sync is needed for many users.

Here is a quick rundown of the server-side overhead difference between rsync and zsync. In a test I ran, I compared rsync and zsync in terms of the server's CPU usage while 200 users downloaded a tarball with a 2.5% discrepancy from a previous version. For a fair comparison, SSH was not used for rsync.

With zsync, since all checksum computation overhead has shifted from the server to individual clients, the server overhead is reduced dramatically. A small neat idea makes zsync a real winner!

In the rest of the tutorial, I will show how to distribute a file incrementally using zsync under the Linux environment.

Zsync: Client Side Setup

On the client side, you need to install zsync to initiate file transfers from a remote web server. zsync is extremely lightweight, and is included in most Linux distros. Here is how to install zsync on various distros.

Debian, Ubuntu or Linux Mint:

$ sudo apt-get install zsync

Fedora:

$ sudo yum install zsync

CentOS or RHEL:

First, enable Repoforge repository, and then run:

$ sudo yum install zsync

Arch Linux:

$ sudo pacman -S zsync

Zsync: Server Side Setup

On the server side, pretty much all you need is an HTTP/1.1-capable web server. Any commodity web server will do. For each file you want to distribute with zsync, you then need to prepare a .zsync metadata file.

You can create a .zsync file using the zsyncmake command-line tool, which comes with the zsync package. Below are the steps to create a .zsync metadata file. Follow these steps on any Linux computer where zsync is installed.

Assume that you want to distribute a file custom.tar.gz which is already compressed with gzip.

First, uncompress custom.tar.gz. If the file is not compressed, you can skip this step.

$ gunzip custom.tar.gz

Next, compress the file with zsyncmake using the "-z" option.

$ zsyncmake -z custom.tar

It will compress the input file in gzip format (custom.tar.gz), and create a corresponding .zsync metadata file (custom.tar.zsync).

The reason we compress it with zsyncmake rather than with gzip is that zsyncmake's compression makes the compressed file incrementally updatable.

Finally, upload custom.tar.gz and custom.tar.zsync to a host where a web server is running. Make sure to upload both files to the same directory.
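If new builds appear regularly, these steps are easy to script. Here is a minimal sketch, assuming the build is published in /var/www/html/builds (a hypothetical path):

#!/bin/sh
# refresh the zsync metadata for the latest build
cd /var/www/html/builds || exit 1
gunzip -f custom.tar.gz     # back to the plain tarball
zsyncmake -z custom.tar     # recreates custom.tar.gz and custom.tar.zsync
rm -f custom.tar            # only the recompressed .tar.gz needs to be served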

Incrementally Download a File with Zsync

Once both the server and the client are ready, you can go ahead and run zsync on the client as follows.

$ zsync http://192.168.1.239/custom.tar.zsync

Note that you should point to the URL of the .zsync metadata file, not the tarball to download.

The first download will fetch the whole tarball from the server.

Over time, if the remote tarball gets updated, you can use the same command to re-download the updated file. zsync will automatically detect the change and perform a delta download.

Obviously you need to re-run zsync in the same directory where the previously downloaded tarball resides. The old tarball will be renamed (with .zs-old extension), and the newer tarball will be constructed in the current directory.

If the previously downloaded tarball is somewhere else, you can use the "-i" option to specify its location explicitly.

$ zsync -i /backup/custom.tar http://192.168.1.239/custom.tar.zsync
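And if the local copy should stay current without manual intervention, the same command can be run from cron (the schedule and paths are examples):

00 03 * * * cd /backup && zsync -i /backup/custom.tar http://192.168.1.239/custom.tar.zsync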

Conclusion

In this tutorial, I showed how to use zsync to download a remote file incrementally while putting little overhead on the server. There are places where zsync is already used (e.g., Ubuntu daily ISO images or Debian live ISO images). Now you know what to do when you see .zsync files.

Source: Xmodulo

Categories: Logiciel

Five reliable Windows Server backup solutions

06/07/2016 Comments off

When disaster strikes (and it will), you can put the pieces back together if you’ve got a solid backup solution in place.

If you have a Windows server and you’re not backing it up, you are tempting fate. It’s only a matter of time before something tragic will happen: a hardware failure, a hacker, or a corrupted Windows update. Something will bring that server down and do everything it can to take your data with it.

Luckily, several reliable backup solutions are available for the Windows Server environment. They aren’t perfect. But the five solutions discussed here have their strengths and can be depended upon to keep your data safe. Now, before I reveal the big five, I want to make sure you understand how critical it is to be backing up that data to an external drive and to rotate that drive offsite. If you’re backing up drives and leaving everything onsite, should a fire (or some other catastrophe) occur, that data will be lost.

Note: This list is also available as a photo gallery.

1: Acronis Backup and Restore (with Universal Restore)

Acronis Backup and Restore (Figure A) is the backup tool I use the most. It’s costly but well worth it. The newer version of Acronis does a great job of ensuring that there is room on your external drive for your latest backup. You can overwrite the old backup or go by the traditional cleanup rules established in Acronis Echo. But if you go the Acronis route, you should also purchase the Universal Restore feature. With this tool, you can take a full image of a backup and restore it to any hardware — so long as it meets the requirements of your platform. That way, you don’t need an exact duplicate of your server hardware sitting around waiting to have an image restored onto it. Acronis also has add-ons for Exchange and SQL Servers.

Figure A: Acronis Backup and Restore

2: Symantec Backup Exec

Symantec Backup Exec (Figure B) is a powerful backup solution that does a great job of "just working." And with its at-a-glance calendar, it's easy to see when your backup has succeeded and when it has failed. If you're using Symantec Backup Exec across a network (to back up clients/hosts), you can feel safe because it uses SSL for all network backup connections. Like Acronis, Symantec offers solutions for Exchange and SQL. Also, the ability to manage Vaults (backup destinations) is superb with Symantec. Just make sure that if a Vault is physically (and permanently) removed, you remove it from within the backup software. Otherwise, it will cause problems.

Figure B: Symantec Backup Exec

3: Carbonite

Carbonite (Figure C) is primarily an online backup service (although the Business and BusinessPremier plans allow you to back up to an external drive). It's meant to back up data only. Carbonite is probably one of the most reliable online backup tools and can back up email, documents, POS files, financial data, pictures and other multimedia, and more. But to use Carbonite for Windows Server, you must use the BusinessPremier plan ($229.00 per year), which allows you to include as many servers running Windows Server 2003/2008/2011 as you like. Unlike Acronis, you won't be backing up an image of your server with Carbonite — it's a data-only backup tool. But it's an excellent addition to any onsite backup solution.

Figure C: Carbonite

4: Windows NT Backup

Windows NT Backup (Figure D) is the free backup solution that comes with any Windows Server. Why do I include it here? Although it just does data backup and doesn’t offer many of the bells and whistles of the other tools, it is one of the only truly reliable backup solutions that will clear the Exchange logs for you. I always set up an NT Backup to run once a week (or month, depending upon the needs), just to have it clear the Exchange logs. Otherwise, you’ll be including a lot of unnecessary data in your backups, causing their size to get out of hand.

Figure D: Windows NT Backup

5: ShadowProtect 4 (Server)

ShadowProtect 4 (Figure E) is probably the least-known product in this list. But it falls into place next to Acronis as a feature-rich tool that can do everything — including recover to dissimilar hardware. ShadowProtect 4 also includes an easy-to-use tool that allows you to consolidate backup image files and convert the backups to either .VHD or .VMDK virtual machines. Another outstanding feature is its granular recovery, which allows for the quick recovery of individual files from within a backup. Unique to ShadowProtect is the Virtual Boot technology, which provides for automatic failover to a virtual server.

Figure E: ShadowProtect 4

Which backup?

Every company needs a backup plan. How far you go with your backup depends upon the size of the company, your needs, and your budget. But any of the above solutions will serve as a great place to start — or stop — for your Windows Server backup needs. Do you have a different backup solution to add to the list? Share your recommendations with fellow TechRepublic members.

 
Source: Jack Wallen – TechRepublic
Categories: Logiciel

Simple versioned TimeMachine-like backup using rsync

07/03/2016 Comments off

Using TimeMachine-style backups with rsync: Over many years, I have dealt with scripts that do backup versioning, i.e., maintain multiple backups. Due to their flexibility, they have been complex to understand and configure. Here is a simple rsync-based tool with a different focus: the experienced systems administrator who wants to keep the system's complexity down.

Backup in action: TimeMachine and rsync

It consists of a simple script, which you can call rsync-backup.sh and store wherever you like, e.g., in /usr/local/sbin. I will use these names and paths in the examples.

#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
if [ "$#" -ne 3 ]; then
    echo "$0: Expected 3 arguments, received $#: $@" >&2
    exit 1
fi
if [ -d "$2/__prev/" ]; then
    rsync -a --delete --link-dest="$2/__prev/" "$1" "$2/$3"
else
    rsync -a                                   "$1" "$2/$3"
fi
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"

During normal operation, it boils down to three simple statements:

  1. rsync with --link-dest: copy the contents of <src> to <dst>/<label>, reusing the files from the previous backup as hard links. (The rsync call without --link-dest does not use --delete, to reduce the risk of accidentally deleting files when called with wrong parameters.)
  2. rm and ln: remember this backup location for the next incremental backup.

Voilà – it doesn’t get much easier than that!

Of course, there is something missing: the actual backup policy. It is delegated to cron, which I consider an advantage. With this separation of duties, many policies can be implemented very easily and composed in a modular way:

Create daily backups for each day of the week

You might know this from automysqlbackup or autopostgresqlbackup: A backup is created every day and overwritten after 7 days. This is achieved by adding the following file to /etc/cron.daily/:

#!/bin/sh
/usr/local/sbin/rsync-backup.sh /home /data/backup `date +%A`

All your users' files are copied daily to /data/backup, named after the current day and overwritten weekly.

Daily backups for a month

Sure, this is easy as well, by putting this with a descriptive name into /etc/cron.daily/:

#!/bin/sh
/usr/local/sbin/rsync-backup.sh /home /data/backup `date +Day-%d`
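The same pattern extends to other rotations. For instance, a sketch of a monthly backup kept for a year, dropped into /etc/cron.monthly/ (the label format is just an example):

#!/bin/sh
/usr/local/sbin/rsync-backup.sh /home /data/backup `date +%B`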

Read more…

Categories: Système

MySQL Backup

03/02/2016 Comments off


To back up a database (without and with compression):

# mysqldump NOM_BASE > NOM_FICHIER
# mysqldump NOM_BASE | gzip > NOM_FICHIER

To restore a database (without and with compression):

# mysqladmin create NOM_BASE
# mysql NOM_BASE < NOM_FICHIER
# gunzip < NOM_FICHIER | mysql NOM_BASE

To back up all databases:

# mysqldump --opt --all-databases > NOM_FICHIER

To back up only certain tables:

# mysqldump NOM_BASE NOM_TABLE0 [NOM_TABLE1...] > NOM_FICHIER

To approximate an "--exclude" option (which mysqldump sorely lacks):

mysql -B -N -e 'show databases' | 
  perl -ne 'print unless /\b(?:phpmyadmin|mysql|information_schema)\b/' | 
  xargs echo mysqldump -B

And to back up tables matching a pattern (most often a prefix):

# mysqldump NOM_BASE $(mysql NOM_BASE -B --column-names=False -e "show tables like 'exemple_%'") > NOM_FICHIER

To dump with a particular condition:

mysqldump -t <base> <table> --where="my_id='66666666'"

This lets you re-inject the data resulting from a SELECT * FROM base.table WHERE my_id='66666666'.

All of these operations can of course be run against a specific instance by specifying its port with the --port option (valid for both mysqldump and mysql).

To get a list of MySQL users, you can use this function (gleaned from Server Fault):

mygrants()
{
  mysql -B -N -e "SELECT DISTINCT CONCAT(
    'SHOW GRANTS FOR ''', user, '''@''', host, ''';'
    ) AS query FROM mysql.user" | 
  mysql | 
  sed 's/\(GRANT .*\)/\1;/;s/^\(Grants for .*\)/## \1 ##/;/##/{x;p;x;}'
}
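Put together, these one-liners are enough for a small nightly dump job. A sketch, with a hypothetical path and retention policy:

#!/bin/sh
# nightly compressed dump of all databases (path and retention are examples)
BACKUP_DIR=/var/backups/mysql
mkdir -p "$BACKUP_DIR"
mysqldump --opt --all-databases | gzip > "$BACKUP_DIR/all-databases-$(date +%F).sql.gz"
# drop dumps older than 14 days
find "$BACKUP_DIR" -name 'all-databases-*.sql.gz' -mtime +14 -delete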

Read more…

HowTo: Make Ubuntu a Perfect Mac File Server and Time Machine Volume

13/01/2016 Comments off

For quite some time I have been using my Ubuntu machine as a file and backup server for all the Macs in my network, and it is perfectly accessible from the Finder in Mac OS X. There are some instructions available on the web for this task, but all of them failed in my case, so I wrote my own tutorial with all the steps needed for it to work properly.

So here's my little tutorial for connecting Mac OS X Leopard with Ubuntu and using your Ubuntu machine as a backup volume for Time Machine. All steps can be reproduced on any Linux box, and they work with Mac OS X 10.4 Tiger too. At the end of this tutorial you will have a server which shows up in the Finder sidebar and behaves just like a Mac server when you access it from your Macs. To be perfectly integrated with Mac OS X, we're going to use the Apple Filing Protocol (AFP) for network and file sharing.

Although this tutorial involves using the Terminal in Ubuntu and looks a bit geeky, it's very easy even for beginners. I have tried to explain all the steps and Terminal commands, so you may learn a bit about the Terminal too. At the end of the article you can download my Server Displays icon pack, quickly made by me, with custom icons for a Mac, Ubuntu and Windows server.

Personally, I use a fresh installation of the Ubuntu 8.04 Hardy Heron Desktop version (32-bit on one machine, 64-bit on the other) and Mac OS X Leopard (10.5.3 and later) to connect to them. On my Ubuntu boxes no other file sharing protocol, such as Samba (Windows sharing) or NFS, is activated.

Rumor has it that Apple will add some undocumented AFP commands with the Mac OS X 10.5.6 update, which therefore won't be supported by the current Netatalk package (and maybe never will be). So be sure to check the latest comments on this article when the 10.5.6 update is out, to see whether this rumor is true and whether it causes any problems.

Here are the steps involved in setting up your Ubuntu box as a Mac file server:

  1. Modify and install Netatalk (Open Source AFP implementation)
  2. Configure Netatalk
  3. Configure shared volumes (and Time Machine volume)
  4. Install Avahi (Open Source Bonjour implementation)
  5. Configure Avahi and advertise services
  6. Configure TimeMachine
  7. Conclusion, problems and more information
  8. Downloading and using the Server Display Icons
  9. Translations Of This Article
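As a taste of the first steps, the package installation and a shared volume declaration might look roughly like this on a more recent Ubuntu (a sketch: the path, the volume name and the options:tm flag are assumptions and depend on your Netatalk version):

sudo apt-get install netatalk avahi-daemon

# /etc/netatalk/AppleVolumes.default -- declare a volume usable by Time Machine
/srv/timemachine "TimeMachine" options:tm

sudo /etc/init.d/netatalk restart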

Read more…

Categories: Système

Rsnapshot

30/12/2015 Comments off

Introduction

You know it by now: backups are essential. Necessary, but easy to forget, unless they are performed automatically.

Here is a tutorial describing the procedure for setting up a simple automatic backup solution based on rsnapshot.

Rsnapshot is a script written in Perl.

It uses rsync (and ssh if you wish) to perform backups at regular intervals.

It can back up a file system, or databases by means of scripts.

One of rsnapshot's main advantages is its extreme simplicity.

rsnapshot uses Unix hard links to:

  • Avoid needlessly duplicating files.
  • Make restores easier.

rsnapshot creates the illusion of several full backups, while the file system actually holds only the first one plus whatever differences have appeared between it and the following ones. It is a differential backup method.

In this tutorial we will explain how to set up a secure differential backup of a directory on a remote machine.

Prerequisite: configuring SSH and keys

You must be able to connect to the target machines without a password:

First of all, configure ssh and import the remote server's key.

I will proceed as in this tutorial.

Test it:

root@nas:~# ssh -p 10122 vanille
Linux vanille.zehome.org 2.6.32-5-amd64 #1 SMP Tue Jun 14 09:42:28 UTC 2011 x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Aug 11 08:07:48 2011 from nas.zehome.org
root@vanille:~#

Installing rsnapshot

root@nas:~# apt-get install rsnapshot

This will also install rsync at the same time.

Back up the configuration file:

root@nas:~# cp /etc/rsnapshot.conf /etc/rsnapshot.conf.sos
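Once rsnapshot is installed, the interesting part is /etc/rsnapshot.conf. A minimal sketch of the lines relevant to this setup (the values are examples, and fields must be separated by tabs, not spaces):

snapshot_root   /var/cache/rsnapshot/
cmd_ssh /usr/bin/ssh
ssh_args        -p 10122
interval        daily   7
interval        weekly  4
backup  root@vanille:/home/     vanille/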

Read more…

Versioned backups of your machine with RSnapShot

30/12/2015 Comments off

Source: Nicolargo

This free software lets you automate versioned backups of your machines, both locally and remotely.

RSnapShot is available in the repositories of most GNU/Linux and *BSD distributions. We will illustrate this article with an installation and configuration of RSnapShot on a Gandi server running Ubuntu 10.04 LTS.

Installing RSnapShot

We use the version available in the official repositories:

sudo aptitude install rsnapshot

Configuring RSnapShot

All of the configuration is centralized in the /etc/rsnapshot.conf file.

sudo vi /etc/rsnapshot.conf

Watch the syntax in this file: RSnapShot is quite picky. It requires every directory to end with a / and tabs between fields.

The important variables to configure are the following:

snapshot_root   /.snapshots/

The snapshot_root variable sets the root directory where the backups will be stored. This directory can be on the same disk as the system disk (as in the default configuration, which uses the /.snapshots/ directory).

For my part, I advise you to use, if possible, a directory stored on a second physical disk. For example, on my Gandi server I have a second disk mounted on /srv/backup, so I will set the variable to /srv/backup/snapshots/ (note the / at the end of the directory!).

Example: snapshot_root /srv/backup/snapshots/

cmd_ssh /path/to/ssh

If you want to use the remote-server backup features (over the SSH protocol), uncomment the previous line. If you need to pass specific arguments to SSH, also fill in the ssh_args line.

Example: cmd_ssh /usr/bin/ssh

interval monthly 3

Enables the monthly backup (disabled by default).

Next come the variables that define what you want to back up.

Local backups

This means a versioned backup of directories on the machine where RSnapShot is installed.

backup /home/ localhost/

The /home/ directory will be backed up into the $snapshot_root/localhost/ subdirectory.

Example:

backup /home/ localhost/
backup /etc/ localhost/
backup /var/svn/ localhost/
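Remote backups follow the same pattern once cmd_ssh is enabled; a sketch with a hypothetical host name:

backup  root@server.example.com:/etc/  server/

On each run, rsnapshot then pulls /etc/ from server.example.com over SSH into the server/ subdirectory of every snapshot (daily.0, daily.1, and so on).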

Read more…