Archive

Archives for the ‘Système’ category

How to monitor a log file on Linux with logwatch

16/04/2019

Linux operating system and many applications create special files commonly referred to as « logs » to record their operational events. These system logs or application-specific log files are an essential tool when it comes to understanding and troubleshooting the behavior of the operating system and third-party applications. However, log files are not precisely what you would call « light » or « easy » reading, and analyzing raw log files by hand is often time-consuming and tedious. For that reason, any utility that can convert raw log files into a more user-friendly log digest is a great boon for sysadmins.

logwatch is an open-source log parser and analyzer written in Perl, which can parse and convert raw log files into a structured format, producing a customizable report based on your use cases and requirements. In logwatch, the focus is on producing a more easily consumable log summary, not on real-time log processing and monitoring. As such, logwatch is typically invoked as an automated cron task at a desired time and frequency, or manually from the command line whenever log processing is needed. Once a log report is generated, logwatch can email the report to you, save it to a file, or display it on the screen.
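
For example, a cron entry along the following lines would mail a nightly report; this is only a sketch, and the schedule, install path, and email address are placeholders to adapt to your setup:

# /etc/cron.d/logwatch -- illustrative entry, adjust to your own system
0 6 * * * root /usr/sbin/logwatch --output mail --mailto admin@example.com --detail High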

A logwatch report is fully customizable in terms of verbosity and processing coverage. The log processing engine of logwatch is extensible, in the sense that if you want to enable logwatch for a new application, you can write a log processing script (in Perl) for the application’s log file and plug it into logwatch.

One downside of logwatch is that its report does not include the detailed timestamp information available in the original log files. You will only know that a particular event was logged within the requested range of time, and you will have to go back to the original log files to get exact timing information.

Installing Logwatch

On Debian and derivatives:

# aptitude install logwatch

On Red Hat-based distributions:

# yum install logwatch

Configuring Logwatch

During installation, the main configuration file (logwatch.conf) is placed in /etc/logwatch/conf. Configuration options defined in this file override system-wide settings defined in /usr/share/logwatch/default.conf/logwatch.conf.

If logwatch is launched from the command line without any arguments, the custom options defined in /etc/logwatch/conf/logwatch.conf will be used. However, if any command-line arguments are specified with the logwatch command, those arguments override the default/custom settings in /etc/logwatch/conf/logwatch.conf.

In this article, we will customize several default settings of logwatch by editing the /etc/logwatch/conf/logwatch.conf file.

Detail = <Low, Med, High, or a number>

The « Detail » directive controls the verbosity of a logwatch report. It can be a positive integer, or High, Med, Low, which correspond to 10, 5, and 0, respectively.

MailTo = youremailaddress@yourdomain.com

The « MailTo » directive is used if you want to have a logwatch report emailed to you. To send a logwatch report to multiple recipients, you can specify their email addresses separated by spaces. To be able to use this directive, however, you will need to configure a local mail transfer agent (MTA) such as sendmail or Postfix on the server where logwatch is running.

Range = <Yesterday|Today|All>

The « Range » directive specifies the time range of a logwatch report. Common values for this directive are Yesterday, Today and All. When « Range = All » is used, the « Archive = yes » directive is also needed, so that all archived versions of a given log file (e.g., /var/log/maillog, /var/log/maillog.X, or /var/log/maillog.X.gz) are processed.

Besides such common range values, you can also use more complex range options such as the following.

  • Range = « 2 hours ago for that hour »
  • Range = « -5 days »
  • Range = « between -7 days and -3 days »
  • Range = « since September 15, 2014 »
  • Range = « first Friday in October »
  • Range = « 2014/10/15 12:50:15 for that second »

To be able to use such free-form range expressions, you need to install the Date::Manip Perl module from CPAN. Refer to this post for CPAN module installation instructions.

Service = <service-name-1>
Service = <service-name-2>
. . .

The « Service » option specifies one or more services to monitor with logwatch. All available services are listed in /usr/share/logwatch/scripts/services, which covers essential system services (e.g., pam, secure, iptables, syslogd) as well as popular application services such as sudo, sshd, http, fail2ban, and samba. If you want to add a new service to the list, you will have to write a corresponding log processing Perl script and place it in this directory.

If this option is used to select specific services, you need to comment out the line « Service = All » in /usr/share/logwatch/default.conf/logwatch.conf.


Format = <text|html>

The « Format » directive specifies the format (text or HTML) of a logwatch report.

Output = <file|mail|stdout>

The « Output » directive indicates where a logwatch report should be sent. It can be saved to a file (file), emailed (mail), or displayed on screen (stdout).
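
Putting these directives together, a customized /etc/logwatch/conf/logwatch.conf could look something like the following sketch; the email address and the selected services are only examples:

Detail = High
MailTo = admin@example.com
Range = Yesterday
# Archive = yes   (only needed together with « Range = All »)
Service = sshd
Service = sudo
Service = iptables
Format = html
Output = mail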

Read more…

Categories: Système

How to Backup Linux? 15 rsync Command Examples

15/04/2019

rsync stands for remote sync.

rsync is used to perform backup operations on UNIX / Linux.

The rsync utility is used to synchronize files and directories from one location to another in an efficient way. The backup location can be on the local server or on a remote server.

Important features of rsync

  • Speed: The first time, rsync replicates the whole content between the source and destination directories. From then on, rsync transfers only the changed blocks or bytes to the destination location, which makes the transfer really fast.
  • Security: rsync allows encryption of data using the ssh protocol during transfer.
  • Less bandwidth: rsync compresses and decompresses data block by block at the sending and receiving ends respectively, so the bandwidth it uses is typically lower than that of other file transfer protocols.
  • Privileges: No special privileges are required to install and execute rsync.

Syntax

$ rsync options source destination

The source and destination can be either local or remote. For a remote location, specify the login name, remote server name, and path.
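
For instance, a sketch of syncing a local directory to a remote server over SSH; the user name, host name, and paths below are placeholders:

$ rsync -avz /var/opt/installation/inventory/ backupuser@backupserver:/backup/inventory/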

Example 1. Synchronize Two Directories in a Local Server

To sync two directories on a local computer, use the following rsync -zvr command.

$ rsync -zvr /var/opt/installation/inventory/ /root/temp
building file list ... done
sva.xml
svB.xml
.
sent 26385 bytes  received 1098 bytes  54966.00 bytes/sec
total size is 44867  speedup is 1.63
$

In the above rsync example:

  • -z is to enable compression
  • -v verbose
  • -r indicates recursive

Now let us see the timestamp on one of the files that was copied from source to destination. As you see below, rsync didn’t preserve timestamps during sync.

$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 bin  bin  949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root bin  949 Sep  2  2009 /root/temp/sva.xml

Example 2. Preserve timestamps during Sync using rsync -a

The rsync option -a indicates archive mode. The -a option does the following:

  • Recursive mode
  • Preserves symbolic links
  • Preserves permissions
  • Preserves timestamp
  • Preserves owner and group

Now, execute the same command provided in example 1 (but with the rsync option -a added) as shown below:

$ rsync -azv /var/opt/installation/inventory/ /root/temp/
building file list ... done
./
sva.xml
svB.xml
.
sent 26499 bytes  received 1104 bytes  55206.00 bytes/sec
total size is 44867  speedup is 1.63
$

As you see below, rsync preserved timestamps during sync.

$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 root  bin  949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root  bin  949 Jun 18  2009 /root/temp/sva.xml

Read more…

Categories: Réseau, Système

Collect & visualize your logs with Logstash, Elasticsearch & Redis

14/04/2019

Source: michael.bouvy.net

Update of December 6th: although Logstash does the job as a log shipper, you might consider replacing it with Lumberjack / Logstash Forwarder, which needs far fewer resources, and keeping Logstash on your indexer to collect, transform and index your log data (into Elasticsearch): check out my latest blog post on the topic.

Kibana Dashboard


Even if you manage a single Linux server, you probably already know how hard it is to keep an eye on what’s going on with it, and especially to keep track of log data. And this becomes even worse when you have several (physical or virtual) servers to administrate.

Although Munin is very helpful for monitoring various metrics from my servers / VMs, I felt the need for something more, a bit less static and more interactive.

There are 3 kinds of logs I especially wanted to track:

  • Apache 2 access logs
  • iptables logs
  • Syslogs

After searching around on the internet for a great tool that would help me, I read about the open-source log management tool Logstash, which seems to perfectly suit a (major) part of my needs: log collecting / processing.

For the purpose of this post, I will take the following network architecture and assume I want to collect my Apache, iptables, and system logs from servers 1/2/3 (“shippers”) on server 4 (“indexer”) and visualize them:

[Architecture diagram: logstach-archi1]

As you can see, I am using 4 complementary applications, the role of each one being:

  • Logstash: log collector, processor and shipper (to Redis) on log “shippers” 1-3; log indexer on server 4 (reads from Redis, writes to Elasticsearch) (see the configuration sketch after this list)
  • Redis: log data broker, receiving data from log “shippers” 1-3
  • Elasticsearch: persistent log data storage
  • Kibana: (time-based) log data visualization (graphs, tables, etc.)
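
As a rough illustration of how the pieces fit together, a minimal Logstash configuration for this setup could look like the sketch below. Option names vary between Logstash versions, and the log path, host names and the « logstash » Redis key are placeholders.

Shipper configuration (servers 1-3): read local logs and push events to Redis on the indexer.

input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
  }
}
output {
  redis {
    host => "server4"        # the indexer
    data_type => "list"
    key => "logstash"
  }
}

Indexer configuration (server 4): pull events from Redis and index them into Elasticsearch.

input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch { host => "127.0.0.1" }
}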

Read more…

Categories: Système

How to secure SSH login with one-time passwords on Linux

13/04/2019

As someone once said, security is not a product, but a process. While the SSH protocol itself is cryptographically secure by design, someone can wreak havoc on your SSH service if it is not administered properly, be it weak passwords, compromised keys or an outdated SSH client.

As far as SSH authentication is concerned, public key authentication is in general considered more secure than password authentication. However, key authentication is actually not desirable, and even less secure, if you are logging in from a public or shared computer, where things like a stealth keylogger or memory scraper are always a possibility. If you cannot trust the local computer, it is better to use something else. This is when « one-time passwords » come in handy. As the name implies, each one-time password is for single use only. Such disposable passwords can be safely used in untrusted environments, as they cannot be re-used even if they are stolen.

One way to generate disposable passwords is via Google Authenticator. In this tutorial, I am going to demonstrate another way to create one-time passwords for SSH login: OTPW, a one-time password login package. Unlike Google Authenticator, you do not rely on any third party for one-time password generation and verification.

What is OTPW?

OTPW consists of a one-time password generator and PAM-integrated verification routines. In OTPW, one-time passwords are generated a priori with the generator and carried by the user securely (e.g., printed on a sheet of paper). Cryptographic hashes of the generated passwords are then stored on the SSH server host. When a user logs in with a one-time password, OTPW’s PAM module verifies the password, and invalidates it to prevent re-use.

Step One: Install and Configure OTPW on Linux

Debian, Ubuntu or Linux Mint:

Install the OTPW packages with apt-get.

$ sudo apt-get install libpam-otpw otpw-bin

Open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).

#@include common-auth

and add the following two lines (to enable one-time password authentication):

auth       required     pam_otpw.so
session    optional     pam_otpw.so


Fedora or CentOS/RHEL:

OTPW is not available as a prebuilt package on Red Hat-based systems, so let’s install OTPW by building it from source.

First, install the prerequisites:

$ sudo yum install git gcc pam-devel
$ git clone https://www.cl.cam.ac.uk/~mgk25/git/otpw
$ cd otpw

Open the Makefile with a text editor, and edit the line that starts with « PAMLIB= » as follows.

On 64-bit system:

PAMLIB=/usr/lib64/security

On 32-bit system:

PAMLIB=/usr/lib/security

Compile and install it. Note that installation will automatically restart the SSH server, so be ready to be disconnected if you are on an SSH connection.

$ make
$ sudo make install

Now you need to update the SELinux policy, since /usr/sbin/sshd tries to write to the user’s home directory, which is not allowed by the default SELinux policy. The following commands will do that. If you are not using SELinux, skip this step.

$ sudo grep sshd /var/log/audit/audit.log | audit2allow -M mypol
$ sudo semodule -i mypol.pp

Next, open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).

#auth       substack     password-auth

and add the following two lines (to enable one-time password authentication):

auth       required     pam_otpw.so
session    optional     pam_otpw.so
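
Once the PAM module is in place, each user generates their list of one-time passwords with the otpw-gen tool that ships with OTPW. As a rough sketch (exact prompts and options may differ slightly between versions):

# otpw-gen prompts for a prefix password, writes the hashed passwords to ~/.otpw,
# and outputs the printable one-time password list (here sent straight to the printer)
$ otpw-gen | lpr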

Read more…

How to run iptables automatically after reboot on Debian

13/04/2019

If you have customized iptables rules and would like to load them persistently across reboots on Debian, you can leverage the if-up.d scripts located in /etc/network/if-up.d. On Debian, any script that is marked as executable and placed in /etc/network/if-up.d gets executed when a network interface is brought up.

In order to run iptables automatically after reboot on Debian, do the following.

First, customize iptables as you wish, and then save the current iptables rule set using the iptables-save command.

$ sudo iptables-save > /etc/firewall.conf

The above command will dump the current iptables rule set into the /etc/firewall.conf file, which the iptables-restore command can later use to restore the same rule set.
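
For reference, the saved file uses the iptables-save format; a very small rule set might look something like this (your own rules will of course differ):

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT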

Now create the following if-up.d script called iptables that restores the saved iptables rule set.

$ sudo vi /etc/network/if-up.d/iptables
#!/bin/sh
iptables-restore < /etc/firewall.conf
$ sudo chmod +x /etc/network/if-up.d/iptables

Alternatively, you can add the « iptables-restore < /etc/firewall.conf » command to /etc/rc.local, which gets executed at the end of system boot-up.
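
In that case, /etc/rc.local could look something like this (a sketch; keep whatever commands your rc.local already contains, and make sure « exit 0 » stays last):

#!/bin/sh -e
# restore the saved firewall rules at the end of the boot process
iptables-restore < /etc/firewall.conf
exit 0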

Source: Xmodulo