Archive

Archives for the category 'System'

Aria2 – The ultra-fast download tool

10/01/2024

Source: korben.info

Linux graybeards have known it for a long time, but for everyone else, here is a quick rundown. First of all, Aria2 is released under the GPL and runs without any trouble on Mac, Linux, Windows and even Android. The tool is lightweight (it uses little CPU and memory) but above all very powerful, since on top of classic links it handles Metalink, which lets it download files over several simultaneous connections using multiple sources and/or different protocols.

Yes, it's crazy! This lets you make the most of the bandwidth you have available.

Aria2 therefore supports HTTP and FTP, but also BitTorrent with all of its features (DHT, PEX, encryption, magnet URIs, web seeding, selective downloading, UDP trackers and the Local Peer Discovery protocol). Aria2 can also be controlled remotely (RPC support) and handles access credentials, cookies, proxies and so on without any trouble.

Don't go thinking that Aria2 is a complicated tool to get to grips with. Not at all: on the contrary, it is very easy to pick up, as you will see in the examples below:

To simply download a file:

aria2c http://example.org/mylinux.iso

To download a file from 2 different sources (HTTP, FTP):

aria2c http://a/f.iso ftp://b/f.iso

To download a file from 3 connections with different sources (Torrent, FTP, HTTP):

aria2c -Ttest.torrent "http://host1/file" "ftp://host2/file"
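
The post also mentions splitting a single download across several connections and driving Aria2 remotely over RPC. As a minimal sketch of what that looks like (the file URL, connection counts and secret token below are placeholders, not from the original article):

aria2c -x4 -s4 http://example.org/mylinux.iso
aria2c --enable-rpc --rpc-listen-all --rpc-secret=YOUR_TOKEN --daemon=true

The first command opens up to 4 connections to the server for a single file; the second starts aria2c in the background as an RPC server that graphical front-ends such as webui-aria2 can connect to.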

If you feel like exploring its possibilities a bit further, the documentation is here. Also note that graphical front-ends exist for Aria2, such as:

  • webui-aria2: browser-based interface
  • uGet: a download manager for Linux

Aria2 can be downloaded here.

Categories: Network, System

Synchronizing folders with rsync

10/01/2024

Source: Juan Valencia’s website

In this post I cover the basics of rsync, in preparation for a subsequent post that will cover backups and its use in conjunction with cron jobs to automate the backup process. It goes from copying and synchronizing local files and folders to using rsync to transfer information between computers. Its use as a daemon when SSH is unavailable was moved to its own section.

Topics
The basics of rsync
Copying local files and folders
Dealing with whitespace and rare characters
Update the contents of a folder
Synchronizing two folders with rsync
Compressing the files while transferring them
Transferring files between two remote systems
Excluding files and directories
Running rsync as a daemon (moved to its own section)
Some additional rsync parameters
Footnotes

The basics of rsync

rsync is a very versatile copying and backup tool that is included by default in almost every Linux distribution. It can be used as an advanced copying tool, allowing us to copy files both locally and remotely. It can also be used as a backup tool, and it supports the creation of incremental backups.
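
As a quick illustration of rsync used as an advanced copy tool (the folder paths below are placeholders):

rsync -av /path/to/source/ /path/to/destination/

The -a (archive) option copies recursively while preserving permissions, timestamps, symbolic links and ownership where possible, and -v lists each file as it is copied.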

rsync comes with a famous delta-transfer algorithm that allows us to transfer new files as well as recent changes to existing files, while ignoring unchanged files. In addition, the behavior of rsync can be thoroughly customized, which helps us automate backups; it can also be run as a daemon to turn the computer into a host and allow rsync clients to connect to it.

Besides copying local files and folders, rsync allows us to copy over SSH (Secure Shell) and RSH (Remote Shell), and it can be run as a daemon on a computer so that other computers can connect to it. When rsync is run as a daemon, it listens on TCP port 873.

When we use rsync as a daemon, or when we use RSH, the data sent between computers travels unencrypted. This is fine if you are transferring files between two computers on the same local network, but it shouldn't be used to transfer files over insecure networks such as the Internet. For that purpose, SSH is the way to go.

This is the main reason why I favor SSH for my transfers; besides, since SSH is secure, many servers already have the SSH daemon available. That said, running rsync as a daemon is still useful for transfers over fast connections, as is usually the case in a local network. I don't have the RSH daemon running on my computers, so you may find me a bit biased towards SSH in the examples. The examples covering the transfer of files between two computers use SSH as the transport, but in a separate post I cover the use of rsync as a daemon.
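
As a sketch of what an SSH-based transfer looks like (the user, host and paths below are placeholders):

rsync -av -e ssh /path/to/local/folder/ user@remote.example.com:/path/to/remote/folder/

The -e ssh option tells rsync to tunnel the transfer through SSH, so the data is encrypted in transit, and the delta-transfer algorithm ensures that only new or changed files are actually sent.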

Read more…

Categories: Network, System

Fixing Mac OSX File Permissions and ACLs From the Command Line

09/01/2024

Recently the hard drive in my mac mini running Mac OSX Leopard (10.5) failed. Luckily I had time machine backing it up to an external USB disk. Now, since I had to replace the drive and rebuild my system anyway I figured, why not upgrade to Snow Leopard? Planning to just pull what I needed off the backup drive manually I went ahead with the upgrade. There aren’t too many files on this machine that I depend on. Just some ssh keys, gpg keys and random documents scattered about here and there. So I upgraded, installed my apps and copied my files from the backup. Everything was going smoothly until I tried to actually write to one of the files I copied from the backup drive. This is when I started getting permission errors.

Here’s what happened when I tried to update my ssh known_hosts file:

airbag:~ keith$ echo foo > .ssh/known_hosts 
-bash: .ssh/known_hosts: Permission denied

Huh? But I own this file… don't I?

airbag:~ keith$ id
uid=501(keith) gid=20(staff) groups=20(staff),402(com.apple.sharepoint.group.1),204(_developer),100(_lpoperator),98(_lpadmin),81(_appserveradm),80(admin),79(_appserverusr),61(localaccounts),12(everyone),401(com.apple.access_screensharing)

airbag:~ keith$ ls -al .ssh/known_hosts 
-rw-r--r--@ 1 keith 502 56140 Mar 25 2009 .ssh/known_hosts
I do own it… And so began much head scratching and man page reading.

Well, as it turns out I forgot to look at the file ACLs…

airbag:~ keith$ ls -le .ssh/known_hosts 
-rw-r--r--@ 1 keith 502 56140 Mar 25 2009 .ssh/known_hosts
 0: group:everyone deny write,delete,append,writeattr,writeextattr,chown

Well no wonder, the ACL is set to deny write,delete,append,writeattr,writeextattr and chown from everyone! Let’s get rid of that.

airbag:~ keith$ sudo chmod -N .ssh/known_hosts 
Password:

That ought to do it. The -N flag says get rid of all the ACL info on the file. You could also update this to be just right for your user or group but I’d rather use only the standard unix permissions.
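
If you would rather keep ACLs but grant yourself access instead of wiping them, macOS chmod can also append an allow entry. A minimal sketch (the user name and permission list are placeholders for whatever your situation needs):

chmod +a "keith allow read,write,append" .ssh/known_hosts

This adds an ACL entry granting the named user read, write and append rights on the file, rather than removing the ACL information entirely.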

airbag:~ keith$ ls -le .ssh/known_hosts 
-rw-r--r--@ 1 keith 502 56140 Mar 25 2009 .ssh/known_hosts

Seems to have removed all ACLs from the file. I wonder if we can write to it now…

airbag:~ keith$ echo foo >> .ssh/known_hosts 
airbag:~ keith$

And there you have it, the file is writable once again. Now it's time to get some real work done!

Categories: System

How to list the crontabs for all users?

09/01/2024

I ended up writing a script (I'm trying to teach myself the finer points of bash scripting, so that's why you don't see something like Perl here). It's not exactly a simple affair, but it does most of what I need. It uses Kyle's suggestion for looking up individual users' crontabs, but also deals with /etc/crontab (including the scripts launched by run-parts in /etc/cron.hourly, /etc/cron.daily, etc.) and the jobs in the /etc/cron.d directory.

It takes all of those and merges them into a single tab-separated display. Here is the script:

#!/bin/bash

# System-wide crontab file and cron job directory. Change these for your system.
CRONTAB='/etc/crontab'
CRONDIR='/etc/cron.d'

# Single tab character. Annoyingly necessary.
tab=$(echo -en "\t")

# Given a stream of crontab lines, exclude non-cron job lines, replace
# whitespace characters with a single space, and remove any spaces from the
# beginning of each line.
function clean_cron_lines() {
    while read line ; do
        echo "${line}" |
            egrep --invert-match '^($|\s*#|\s*[[:alnum:]_]+=)' |
            sed --regexp-extended "s/\s+/ /g" |
            sed --regexp-extended "s/^ //"
    done;
}

# Given a stream of cleaned crontab lines, echo any that don't include the
# run-parts command, and for those that do, show each job file in the run-parts
# directory as if it were scheduled explicitly.
function lookup_run_parts() {
    while read line ; do
        match=$(echo "${line}" | egrep -o 'run-parts (-{1,2}\S+ )*\S+')

        if [[ -z "${match}" ]] ; then
            echo "${line}"
        else
            cron_fields=$(echo "${line}" | cut -f1-6 -d' ')
            cron_job_dir=$(echo "${match}" | awk '{print $NF}')

            if [[ -d "${cron_job_dir}" ]] ; then
                for cron_job_file in "${cron_job_dir}"/* ; do  # */
                    [[ -f "${cron_job_file}" ]] && echo "${cron_fields} ${cron_job_file}"
                done
            fi
        fi
    done;
}

# Temporary file for crontab lines.
temp=$(mktemp) || exit 1

# Add all of the jobs from the system-wide crontab file.
cat "${CRONTAB}" | clean_cron_lines | lookup_run_parts >"${temp}"

# Add all of the jobs from the system-wide cron directory.
cat "${CRONDIR}"/* | clean_cron_lines >>"${temp}"  # */

# Add each user's crontab (if it exists). Insert the user's name between the
# five time fields and the command. User names are read from /etc/passwd.
while read user ; do
    crontab -l -u "${user}" 2>/dev/null |
        clean_cron_lines |
        sed --regexp-extended "s/^((\S+ +){5})(.+)$/\1${user} \3/" >>"${temp}"
done < <(cut --fields=1 --delimiter=: /etc/passwd)

# Output the collected crontab lines. Replace the single spaces between the
# fields with tab characters, sort the lines by hour and minute, insert the
# header line, and format the results as a table.
cat "${temp}" |
    sed --regexp-extended "s/^(\S+) +(\S+) +(\S+) +(\S+) +(\S+) +(\S+) +(.*)$/\1\t\2\t\3\t\4\t\5\t\6\t\7/" |
    sort --numeric-sort --field-separator="${tab}" --key=2,1 |
    sed "1i\mi\th\td\tm\tw\tuser\tcommand" |
    column -s"${tab}" -t

rm --force "${temp}"
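
To actually run the script you generally need root privileges, since crontab -l -u only works for other users when run as root. Assuming you save it under a name of your choosing, say list-crontabs.sh (a placeholder name):

sudo bash list-crontabs.sh

The output is one table with columns for minute, hour, day, month, weekday, user and command, covering the system crontab, /etc/cron.d and every user's personal crontab.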

Source: How do I list all cron jobs for all users?

Nmap Reference Guide

08/01/2024

NMAP: Host Discovery

One of the very first steps in any network reconnaissance mission is to reduce a (sometimes huge) set of IP ranges into a list of active or interesting hosts. Scanning every port of every single IP address is slow and usually unnecessary.

Of course what makes a host interesting depends greatly on the scan purposes. Network administrators may only be interested in hosts running a certain service, while security auditors may care about every single device with an IP address. An administrator may be comfortable using just an ICMP ping to locate hosts on his internal network, while an external penetration test may use a diverse set of dozens of probes in an attempt to evade firewall restrictions.

Because host discovery needs are so diverse, Nmap offers a wide variety of options for customizing the techniques used. Host discovery is sometimes called ping scan, but it goes well beyond the simple ICMP echo request packets associated with the ubiquitous ping tool. Users can skip the ping step entirely with a list scan (-sL) or by disabling ping (-Pn), or engage the network with arbitrary combinations of multi-port TCP SYN/ACK, UDP, SCTP INIT and ICMP probes.

The goal of these probes is to solicit responses which demonstrate that an IP address is actually active (is being used by a host or network device). On many networks, only a small percentage of IP addresses are active at any given time. This is particularly common with private address space such as 10.0.0.0/8. That network has 16 million IPs, but I have seen it used by companies with less than a thousand machines. Host discovery can find those machines in a sparsely allocated sea of IP addresses.

If no host discovery options are given, Nmap sends an ICMP echo request, a TCP SYN packet to port 443, a TCP ACK packet to port 80, and an ICMP timestamp request. (For IPv6, the ICMP timestamp request is omitted because it is not part of ICMPv6.) These defaults are equivalent to the -PE -PS443 -PA80 -PP options. The exceptions to this are the ARP (for IPv4) and Neighbor Discovery (for IPv6) scans which are used for any targets on a local ethernet network.
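
As a sketch of what these defaults look like on the command line (the target network below is a placeholder):

nmap -sn 192.168.1.0/24
nmap -sn -PE -PS443 -PA80 -PP 192.168.1.0/24

The -sn option asks for host discovery only, with no port scan; the first command uses the default probes, while the second spells out the same probe set explicitly (ICMP echo, TCP SYN to 443, TCP ACK to 80 and ICMP timestamp).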

For unprivileged Unix shell users, the default probes are a SYN packet to ports 80 and 443 using the connect system call. This host discovery is often sufficient when scanning local networks, but a more comprehensive set of discovery probes is recommended for security auditing. Read more…

Categories: System