Archive

Archives for the category ‘Système’

Securing your Linux server

12/04/2019 Comments off

Securing Linux

This guide will teach you how to secure a server, and in doing so introduce you to the basics of IT security. Why does this matter? By definition, a server is open to the world, so a minimum of security is worthwhile in order to fend off the most simplistic attacks.

The plan is therefore the following:

  • a presentation of the vulnerabilities;

  • a presentation of the tools to mitigate them.

Of course, this is not a complete course in IT security; this tutorial is an introduction. To put it simply, it is a bit like closing the shutters at night.

Hardware-wise, here is what you will need:

  • a server running an xBuntu or Debian distribution (for other distributions the principle is the same, but the commands may differ);

  • root access (over ssh, for example) and a console.

The whole tutorial takes place on the command line so that it is accessible to everyone. Since we will have to edit files, make sure you have a command-line text editor (I will be using nano, which is very simple).

Let's get started!

Filtering traffic with the firewall

Overview

The firewall is the essential building block for securing your server. It filters all traffic, allowing only the exchanges permitted by the administrator. Without a properly configured firewall, more or less all traffic is allowed (meaning an attacker can do whatever they like on your machine), and this kind of weakness can be spotted with a simple port scan.

Now, the Linux kernel already provides the user with a firewall, which can be configured via the iptables tool (normally located at /sbin/iptables). If it is not installed:

apt-get install iptables

Let us now go over how a firewall works; it is relatively simple. A firewall inspects all traffic and checks that every packet exchanged complies with its rules (the filtering criteria). So all we have to do is specify good rules in order to forbid any superfluous traffic.

The criteria can be of various kinds (filtering on ports, protocols, IP addresses, etc.). Here we will base our rules on ports. Of course, you should be as strict as possible in choosing the rules; that is why, by default, every firewall is first set up to block everything, absolutely everything. We then "open" (allow traffic on) the specific ports we want to use (for example, for a web server we open port 80 so that the website is reachable).

Declaring the rules

Blocking everything

Here is the procedure for creating the rules:

1. Create the script:

nano /etc/init.d/firewall

And write in it: #!/bin/sh

2. Flush any previous rules so we start from a clean slate:

iptables -t filter -F
iptables -t filter -X

3. Block all traffic by default (if you are connected over ssh, obviously, do not run the script yet!):

iptables -t filter -P INPUT DROP
iptables -t filter -P FORWARD DROP
iptables -t filter -P OUTPUT DROP

4. Do not close connections that are already established:

iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

The -m and --state parameters tell iptables not to cut off connections that are already established.

5. Allow the loopback interface (we are not going to block ourselves!):

iptables -t filter -A INPUT -i lo -j ACCEPT
iptables -t filter -A OUTPUT -o lo -j ACCEPT

Note: lo is the loopback interface, i.e. localhost (the server itself).

Everything is now blocked; all that is left is to open the ports we actually use.
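
As an illustration, and only as a minimal sketch (adjust the ports to the services you actually run; replies to these connections are already covered by the ESTABLISHED rule from step 4), opening ssh (22) and http (80) could look like this:

# Incoming ssh connections on port 22
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT
# Incoming http connections to a web server on port 80
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
# Outgoing DNS requests, which most servers need (example rule)
iptables -t filter -A OUTPUT -p udp --dport 53 -j ACCEPT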

Read more…

What are useful CLI tools for Linux system admins?

11/04/2019 Comments off

System administrators (sysadmins) are responsible for day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of the trade. Utilizing the proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.


This article will present some of the most popular and useful CLI tools recommended for sysadmins in their day to day activities. If you would like to recommend any useful tool which is not listed here, don’t forget to share it in the comment section.

Network Tools

1. ping: Check end-to-end connectivity (RTT delay, jitter, packet loss) of a remote host with ICMP echo/reply. Useful to check system status and reachability.

2. hping: Network scanning and testing tool that can generate ICMP/TCP/UDP ping packets. Often used for advanced port scanning, firewall testing, manual path MTU discovery and fragmentation testing.

3. traceroute: Discover a layer-3 forwarding path from a local host to a remote destination host with TTL-limited ICMP/UDP/TCP probe packets. Useful to troubleshoot network reachability and routing problems.

4. mtr: A variation of traceroute which characterizes per-hop packet loss/jitter with running statistics. Useful to characterize routing path delays.

5. netcat/socat: A Swiss-army knife of TCP/IP networking that lets you read/write byte streams over TCP/UDP. Useful to troubleshoot firewall policies and service availability.

6. dig: DNS troubleshooting tool that can generate forward queries, reverse queries, find authoritative name servers, check CNAME, MX and other DNS records. Can be instructed to query a specific DNS server of your choosing.

7. nslookup: Another DNS checking/troubleshooting tool. Works with all DNS queries and records. Can query a particular DNS server.

8. dnsyo: A DNS testing tool which checks DNS propagation by performing lookups against a large number of open resolvers located across 1,500 different networks around the world.

9. lsof: Show information about files (e.g., regular files, pipes or sockets) which are opened by processes. Useful to monitor processes or users in terms of their open network connections or opened files.

10. iftop: An ncurses-based TUI utility that can be used to monitor, in real time, bandwidth utilization and network connections for individual network interfaces. Useful to keep track of bandwidth-hogging applications, users, destinations and ports.

11. netstat: A network statistics utility that can show status information and statistics about open network connections (TCP/UDP ports, IP addresses), routing tables, TX/RX traffic and protocols. Useful for network related diagnosis and performance tuning.

12. tcpdump: A popular packet sniffer based on the libpcap packet capture library. Capture filters can be written in Berkeley Packet Filter (BPF) syntax; see the example block after this list.

13. tshark: Another CLI packet sniffer, fully compatible with its GUI counterpart, Wireshark. Supports 1,000 protocols and the list is growing. Useful to troubleshoot, analyze and store information on live packets.

14. ip: A versatile CLI networking tool which is part of the iproute2 package. Used to check and modify routing tables, network device state, and IP tunneling settings. Useful to view routing tables, add/remove static routes, configure network interfaces, and otherwise troubleshoot routing issues.

15. ifup/ifdown: Used to bring up or shut down a particular network interface. Often a preferred alternative to restarting the entire network service.

16. autossh: A program which creates an SSH session and automatically restarts the session should it disconnect. Often useful to create a persistent reverse SSH tunnel across restrictive corporate networks.

17. iperf: A network testing tool which measures maximum bi-directional throughput between a pair of hosts by injecting customizable TCP/UDP data streams in between.

18. elinks/lynx: Text-based web browsers for a CLI-only server environment.
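
A few hedged usage examples of the tools above (the host names, the DNS resolver and the interface name are placeholders, not recommendations):

# End-to-end reachability, four probes
ping -c 4 example.org
# Ask a specific DNS server for the MX records of a domain
dig @8.8.8.8 example.org MX
# Per-hop loss and latency report over 100 probes
mtr --report --report-cycles 100 example.org
# Capture HTTP traffic on eth0 with a BPF filter
tcpdump -i eth0 -n 'tcp port 80'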

Read more…

What are good open-source log monitoring tools on Linux?

05/04/2019 Comments off


In an operating system, logs are all about keeping track of events, be it critical system errors, resource usage warnings, transaction history, application status, or user activities. These logs, which are stored as (text or binary) files in the system, are useful for system auditing, debugging and maintenance. However, with so many different system entities generating log files, and at an ever-growing rate, the challenge for a system admin is how to "consume" these log files effectively.

That's when log monitoring tools come into the picture, streamlining the often laborious process of collecting, parsing and analyzing log files, as well as alerting system admins about any interesting events. These tools are designed from the ground up with a focus on log monitoring, so they offer many attractive features, such as scalable log aggregation and filtering, human-readable display, event correlation, visual or email notification, flexible log retention policy, and so on.

In this post, I am going to introduce a list of popular open-source log monitoring software for Linux, ranging from simple log file viewers to full-blown log monitoring frameworks.

Log Aggregation and Filtering

Log monitoring would not be possible without efficient and scalable mechanisms to collect and pre-process log files. Tools in this category focus on shipping, collecting, filtering, indexing and storing log files, so that they can be further analyzed and visualized in subsequent monitoring pipelines.

1. rsyslog: an open-source log collector server which can filter and consolidate log data (based on the syslog protocol) from different hosts and devices in the network. rsyslog can be configured as a server or a client, where the former plays the role of a log collector and the latter runs as a log sender (a minimal client-side example follows this list).

2. syslog-ng: another open-source implementation of the syslog protocol with more advanced and user-friendly features such as content-based filtering, easier-to-understand config format, and real-time event correlation.

3. systemd journal: systemd journal can be configured for remote journal logging, where locally logged events are forwarded to a remote server over HTTP. In this setup, systemd-journal-upload on a client host serializes and forwards journal messages to systemd-journal-remote running on a remote collector server.

4. logstash: an open-source tool that collects, parses, and stores log files for offline search and analysis. logstash can run in various pipelines thanks to many plugins supporting different input/output interfaces, decoding/encoding, and filtering rules. Input plugins allow logstash to gather log files from different sources and protocols (e.g., files, S3, RabbitMQ, syslog, collectd, TCP/UDP sockets). Filter/codec plugins are used to parse, convert, modify and add metadata to log files. Output plugins pass processed log files to various storage targets (e.g., file, Google cloud storage, Nagios, S3, Zabbix).

5. collectd: a daemon service which gathers various system-level statistics and stores them for historical analysis or real-time graphing. Similar to logstash, collectd has an extensible architecture, where you can enable various input/output plugins to change its collection behavior. For log collection, collectd can leverage the LogFile and Network plugins to aggregate remote log files.

6. Logster: an open-source utility for parsing log files for any interesting data, and aggregating extracted data into metrics for subsequent reporting and graphing pipelines.

7. Fluentd: a unified log aggregation layer which allows in-stream processing for a variety of streaming data and log files. It comes with a huge plugin ecosystem with more than 300 plugins to support various input sources and output interfaces.

8. Nxlog: a unified log collector and forwarder which supports a variety of log sources, formats and protocols. Advanced features include multi-threaded log collection and processing, message buffering and prioritization, built-in log rotation, and TLS/SSL transport.

9. Scribe: a scalable log collector server developed by Facebook. Scribe can aggregate log data which is streamed in real time from a large number of clients. It uses Apache Thrift for protocol encoding, so its interface is compatible with pretty much any language. While a proven solution, Scribe is not something you can deploy quickly as a turnkey solution. Also, note that Scribe is no longer updated and maintained.

10. Flume: a highly scalable and reliable service to transport and collect large volumes of streaming log data from any clients, and store them in a backend storage such as Apache Hadoop's HDFS.
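
As a minimal client-side sketch for rsyslog (item 1), assuming a collector reachable at logs.example.com with a TCP input enabled on it (the host name, port and file name below are assumptions):

# Forward every local syslog message to the remote collector over TCP (note the double '@')
echo '*.* @@logs.example.com:514' > /etc/rsyslog.d/50-forward.conf
systemctl restart rsyslog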

Read more…

16 commands to check hardware information on Linux

05/04/2019 Comments off

Source: BinaryTides

Hardware information

As with everything else, there are plenty of commands to check information about the hardware of your Linux system. Some commands report only on specific hardware components such as the cpu or memory, while others cover multiple hardware units.

This post takes a quick look at some of the most commonly used commands to check information and configuration details about various hardware peripherals and devices. The list includes lscpu, hwinfo, lshw, dmidecode, lspci etc.

1. lscpu

The lscpu command reports information about the cpu and processing units. It does not have any further options or functionality.

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 23
Stepping:              10
CPU MHz:               1998.000
BogoMIPS:              5302.48
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              2048K
NUMA node0 CPU(s):     0-3

2. lshw – List Hardware

A general-purpose utility that reports detailed or brief information about many different hardware units such as cpu, memory, disk, usb controllers, network adapters, etc. lshw extracts the information from various files under /proc.

$ sudo lshw -short

H/W path        Device      Class       Description
===================================================
                            system      ()
/0                          bus         DG35EC
/0/0                        processor   Intel(R) Core(TM)2 Quad CPU    Q8400  @ 2.66GHz
/0/0/1                      memory      2MiB L2 cache
/0/0/3                      memory      32KiB L1 cache
/0/2                        memory      32KiB L1 cache
/0/4                        memory      64KiB BIOS
/0/14                       memory      8GiB System Memory
/0/14/0                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/14/1                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/14/2                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/14/3                     memory      2GiB DIMM DDR2 Synchronous 667 MHz (1.5 ns)
/0/100                      bridge      82G35 Express DRAM Controller
/0/100/2                    display     82G35 Express Integrated Graphics Controller
/0/100/2.1                  display     82G35 Express Integrated Graphics Controller
/0/100/19       eth0        network     82566DC Gigabit Network Connection
/0/100/1a                   bus         82801H (ICH8 Family) USB UHCI Controller #4
/0/100/1a.1                 bus         82801H (ICH8 Family) USB UHCI Controller #5
/0/100/1a.7                 bus         82801H (ICH8 Family) USB2 EHCI Controller #2
/0/100/1b                   multimedia  82801H (ICH8 Family) HD Audio Controller
/0/100/1c                   bridge      82801H (ICH8 Family) PCI Express Port 1
/0/100/1c.1                 bridge      82801H (ICH8 Family) PCI Express Port 2
/0/100/1c.2                 bridge      82801H (ICH8 Family) PCI Express Port 3
/0/100/1c.2/0               storage     JMB368 IDE controller
/0/100/1d                   bus         82801H (ICH8 Family) USB UHCI Controller #1
/0/100/1d.1                 bus         82801H (ICH8 Family) USB UHCI Controller #2
/0/100/1d.2                 bus         82801H (ICH8 Family) USB UHCI Controller #3
/0/100/1d.7                 bus         82801H (ICH8 Family) USB2 EHCI Controller #1
/0/100/1e                   bridge      82801 PCI Bridge
/0/100/1e/5                 bus         FW322/323 [TrueFire] 1394a Controller
/0/100/1f                   bridge      82801HB/HR (ICH8/R) LPC Interface Controller
/0/100/1f.2                 storage     82801H (ICH8 Family) 4 port SATA Controller [IDE mode]
/0/100/1f.3                 bus         82801H (ICH8 Family) SMBus Controller
/0/100/1f.5                 storage     82801HR/HO/HH (ICH8R/DO/DH) 2 port SATA Controller [IDE m
/0/1            scsi3       storage     
/0/1/0.0.0      /dev/sda    disk        500GB ST3500418AS
/0/1/0.0.0/1    /dev/sda1   volume      70GiB Windows NTFS volume
/0/1/0.0.0/2    /dev/sda2   volume      395GiB Extended partition
/0/1/0.0.0/2/5  /dev/sda5   volume      97GiB HPFS/NTFS partition
/0/1/0.0.0/2/6  /dev/sda6   volume      97GiB Linux filesystem partition
/0/1/0.0.0/2/7  /dev/sda7   volume      1952MiB Linux swap / Solaris partition
/0/1/0.0.0/2/8  /dev/sda8   volume      198GiB Linux filesystem partition
/0/3            scsi4       storage     
/0/3/0.0.0      /dev/cdrom  disk        DVD RW DRU-190A

Check out the following post to learn more about lshw

Get hardware information on Linux with lshw command
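
lshw can also be narrowed to a single device class with -C, and dmidecode, mentioned in the introduction, reads similar information from the DMI/SMBIOS tables. A small sketch (output naturally depends on the machine):

# Only the network adapters, in the same short format as above
sudo lshw -short -C network
# Full details of the disks
sudo lshw -C disk
# Memory modules as reported by the BIOS/firmware
sudo dmidecode -t memory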

Read more…


inotify / incron: run a command when something happens to a file or a directory

04/04/2019 Comments off

inotify, the replacement for dnotify, is a technology built into the Linux kernel (>= 2.6.13) that reports events (modifications, accesses, etc.) affecting the contents of a filesystem, by monitoring the inodes (the data structures that hold information about the files of a filesystem).

incron, short for "INotify CRON", makes it possible to use the information provided by inotify in order to carry out an action (commands, scripts, etc.) when given files or directories are modified.

Installing inotify

inotify is built into the kernel and is enabled in the kernels shipped by the distributions.

incron, for its part, is packaged by most distributions, but on Red Hat and its derivatives (CentOS, Scientific Linux, etc.) it requires adding the EPEL repositories ("Extra Packages for Enterprise Linux"; see « Ajout des miroirs EPEL (Extra Packages for Enterprise Linux) sous Red Hat like (CentOS, RHEL, SL, …) » on Admin Linux). Installation is done through your distribution's package manager.

On Ubuntu, Debian and their derivatives:

# apt-get install incron

On Red Hat derivatives:

# yum install incron

On Gentoo Linux, emerge will take care of the installation, just as pacman will do perfectly well on Arch Linux.
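
Once incron is installed, watches are registered with incrontab; an entry has the form "<path> <event mask> <command>". A minimal sketch (the watched directory and the script are hypothetical examples):

# Open the incron table of the current user in an editor (incrond must be running)
incrontab -e

# Example entry: run a script whenever a file is created or finishes being written
# in /var/www/uploads ($@ expands to the watched path, $# to the file name)
/var/www/uploads IN_CREATE,IN_CLOSE_WRITE /usr/local/bin/handle-upload.sh $@/$#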

Read more…