Archive

Articles tagged ‘logs’

How to monitor a log file on Linux with logwatch

31/10/2023

The Linux operating system and many applications create special files, commonly referred to as “logs,” to record their operational events. These system logs or application-specific log files are an essential tool when it comes to understanding and troubleshooting the behavior of the operating system and third-party applications. However, log files are not precisely what you would call “light” or “easy” reading, and analyzing raw log files by hand is often time-consuming and tedious. For that reason, any utility that can convert raw log files into a more user-friendly log digest is a great boon for sysadmins.

logwatch is an open-source log parser and analyzer written in Perl, which can parse and convert raw log files into a structured format, producing a customizable report based on your use cases and requirements. The focus of logwatch is on producing a more easily consumable log summary, not on real-time log processing and monitoring. As such, logwatch is typically invoked as an automated cron task at a desired time and frequency, or manually from the command line whenever log processing is needed. Once a log report is generated, logwatch can email the report to you, save it to a file, or display it on the screen.

A logwatch report is fully customizable in terms of verbosity and processing coverage. The log processing engine of logwatch is extensible, in the sense that if you want to enable logwatch for a new application, you can write a log processing script (in Perl) for the application’s log file and plug it into logwatch.

One downside of logwatch is that it does not include in its report the detailed timestamp information available in the original log files. You will only know that a particular event was logged within the requested time range, and you will have to consult the original log files to get exact timing information.

Installing Logwatch

On Debian and derivatives:

# aptitude install logwatch

On Red Hat-based distributions:

# yum install logwatch

Configuring Logwatch

During installation, the main configuration file (logwatch.conf) is placed in /etc/logwatch/conf. Configuration options defined in this file override system-wide settings defined in /usr/share/logwatch/default.conf/logwatch.conf.

If logwatch is launched from the command line without any arguments, the custom options defined in /etc/logwatch/conf/logwatch.conf will be used. However, if any command-line arguments are specified with logwatch command, those arguments in turn override any default/custom settings in /etc/logwatch/conf/logwatch.conf.
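For example, a command-line run such as the following (option names as documented in the logwatch man page; the values shown are illustrative) overrides the corresponding settings from the configuration file and prints a high-detail report about yesterday's sshd activity to the terminal:

# logwatch --detail High --service sshd --range yesterday --output stdout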

In this article, we will customize several default settings of logwatch by editing /etc/logwatch/conf/logwatch.conf file.

Detail = <Low, Med, High, or a number>

The “Detail” directive controls the verbosity of a logwatch report. It can be a positive integer, or High, Med, or Low, which correspond to 10, 5, and 0, respectively.

MailTo = youremailaddress@yourdomain.com

The “MailTo” directive is used if you want to have a logwatch report emailed to you. To send a logwatch report to multiple recipients, specify their email addresses separated by spaces. To be able to use this directive, however, you will need a working local mail transfer agent (MTA) such as sendmail or Postfix on the server where logwatch is running.

Range = <Yesterday|Today|All>

The “Range” directive specifies the time period covered by a logwatch report. Common values for this directive are Yesterday, Today, or All. When “Range = All” is used, the “Archive = yes” directive is also needed, so that all archived versions of a given log file (e.g., /var/log/maillog, /var/log/maillog.X, or /var/log/maillog.X.gz) are processed.

Besides such common range values, you can also use more complex range options such as the following.

  • Range = “2 hours ago for that hour”
  • Range = “-5 days”
  • Range = “between -7 days and -3 days”
  • Range = “since September 15, 2014”
  • Range = “first Friday in October”
  • Range = “2014/10/15 12:50:15 for that second”

To be able to use such free-form range expressions, you need to install the Date::Manip Perl module from CPAN.
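The module is packaged by most distributions; the following are the usual installation routes (package names assumed from Debian and Red Hat conventions).

On Debian and derivatives:

# aptitude install libdate-manip-perl

On Red Hat-based distributions:

# yum install perl-Date-Manip

Or directly from CPAN on any distribution:

# perl -MCPAN -e 'install Date::Manip'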

Service = <service-name-1>
Service = <service-name-2>
. . .

The “Service” option specifies one or more services to monitor with logwatch. All available services are listed in /usr/share/logwatch/scripts/services; they cover essential system services (e.g., pam, secure, iptables, syslogd) as well as popular application services such as sudo, sshd, http, fail2ban, and samba. If you want to add a new service to the list, you will have to write a corresponding log processing Perl script and place it in this directory.

If this option is used to select specific services, you need to comment out the line “Service = All” in /usr/share/logwatch/default.conf/logwatch.conf.
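As a rough sketch of what plugging in a new service involves (the service name myapp and the paths below are hypothetical, and the exact layout can vary between logwatch versions), a new service is typically declared with two small configuration files, one defining a log file group and one defining the service, alongside the Perl script itself.

In /etc/logwatch/conf/logfiles/myapp.conf (hypothetical log file group):

LogFile = /var/log/myapp.log
Archive = /var/log/myapp.log.*

In /etc/logwatch/conf/services/myapp.conf (hypothetical service definition):

Title = "MyApp"
LogFile = myapp

The script itself would then go in /etc/logwatch/scripts/services/myapp (or in the system-wide directory mentioned above); it reads the selected log lines on standard input and prints its summary on standard output.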


Format = <text|html>

The “Format” directive specifies the format (text or HTML) of a logwatch report.

Output = <file|mail|stdout>

The “Output” directive indicates where a logwatch report should be sent: saved to a file (file), emailed (mail), or displayed on the screen (stdout).
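Putting these directives together, a minimal /etc/logwatch/conf/logwatch.conf might look like the following (the values are illustrative only):

MailTo = admin@example.com
Range = yesterday
Detail = High
Service = sshd
Service = sudo
Format = text
Output = mail

When selecting specific services this way, remember to comment out “Service = All” in /usr/share/logwatch/default.conf/logwatch.conf, as noted above.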

Read more…

Categories: System

Collect & visualize your logs with Logstash, Elasticsearch & Redis

28/10/2023

Source: michael.bouvy.net

Update of December 6th: although Logstash does the job as a log shipper, you might consider replacing it with Lumberjack / Logstash Forwarder, which needs far fewer resources, and keep Logstash on your indexer to collect, transform, and index your log data (into Elasticsearch): check out my latest blog post on the topic.

(Screenshot: Kibana dashboard)


Even if you manage a single Linux server, you probably already know how hard it is to keep an eye on what’s going on with your server, especially when it comes to tracking log data. And this becomes even worse when you have several (physical or virtual) servers to administer.

 

Although Munin is very helpful for monitoring various metrics from my servers / VMs, I felt the need for something more, a bit less static and more interactive.

There are 3 kinds of logs I especially wanted to track:

  • Apache 2 access logs
  • iptables logs
  • Syslogs

After searching around on the internet for a tool that would help me, I read about the open source log management tool Logstash, which seems to perfectly suit a (major) part of my needs: log collection and processing.

For the purpose of this post, I will take the following network architecture and assume I want to collect my Apache, iptables, and system logs from servers 1/2/3 (“shippers”) on server 4 (“indexer”) and visualize them:

(Diagram: network architecture with servers 1-3 acting as log shippers and server 4 as the indexer)

As you can see, I am using 4 complementary applications, the role of each being:

  • Logstash: log collector, processor, and shipper (to Redis) on log “shippers” 1-3; log indexer on server 4 (reads from Redis, writes to Elasticsearch)
  • Redis: log data broker, receiving data from log “shippers” 1-3
  • Elasticsearch: persistent log data storage
  • Kibana: (time-based) log data visualization (graphs, tables, etc.)
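To make these roles concrete, here is a rough sketch of the two Logstash configurations involved, written in the Logstash 1.x-era configuration syntax in use at the time of this post (host names, file paths, and the Redis key are placeholders, and option names may differ in more recent Logstash versions):

# Shipper configuration (servers 1-3): read local logs and push events to Redis on the indexer
input {
  file {
    path => "/var/log/apache2/access.log"
    type => "apache-access"
  }
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
}
output {
  redis {
    host => "server4.example.com"
    data_type => "list"
    key => "logstash"
  }
}

# Indexer configuration (server 4): pull events from Redis and index them into Elasticsearch
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
  }
}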

Read more…

Categories: System

How to configure a syslog server with rsyslog on Linux

18/10/2023

A syslog server represents a central log monitoring point on a network, to which all kinds of devices, including Linux or Windows servers, routers, switches, or any other hosts, can send their logs over the network. By setting up a syslog server, you can filter and consolidate logs from different hosts and devices into a single location, so that you can view and archive important log messages more easily.

On most Linux distributions, rsyslog is the standard syslog daemon that comes pre-installed. Configured in a client/server architecture, rsyslog can play both roles: as a syslog server, it can gather logs from other devices, and as a syslog client, it can transmit its internal logs to a remote syslog server.

In this tutorial, we cover how to configure a centralized syslog server using rsyslog on Linux. Before we go into the details, it is instructive to go over the syslog standard first.

Basics of the Syslog Standard

When logs are collected via the syslog mechanism, three important things must be taken into consideration:

  • Facility level: what type of processes to monitor
  • Severity (priority) level: what type of log messages to collect
  • Destination: where to send or record log messages

Let’s take a look at how the configuration is defined in more detail.

The facility levels define a way to categorize internal system processes. Some of the common standard facilities in Linux are:

  • auth: messages related to authentication (login)
  • cron: messages related to scheduled processes or applications
  • daemon: messages related to daemons (internal servers)
  • kernel: messages related to the kernel
  • mail: messages related to internal mail servers
  • syslog: messages related to the syslog daemon itself
  • lpr: messages related to print servers
  • local0 – local7: messages defined by user (local7 is usually used by Cisco and Windows servers)

The severity (priority) levels are standardized and defined by a standard abbreviation and an assigned number from 0 to 7, where a lower number indicates a more severe event. These levels are:

  • emerg: Emergency – 0
  • alert: Alerts – 1
  • crit: Critical – 2
  • err: Errors – 3
  • warn: Warnings – 4
  • notice: Notification – 5
  • info: Information – 6
  • debug: Debugging – 7

Finally, the destination statement directs a syslog client to perform one of the three following tasks: (1) save log messages to a local file, (2) route them to a remote syslog server over TCP/UDP, or (3) send them to another output such as the console.

In rsyslog, syslog configuration is structured based on the following schema.

[facility-level].[severity-level]  [destination]
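For example, the following lines illustrate this schema. The first block would go in /etc/rsyslog.conf (or a file under /etc/rsyslog.d/) on a client, and the second block enables log reception on the central server; legacy rsyslog directive syntax is shown, and the server address is a placeholder.

# Client side: facility.severity  destination
# Save mail messages of priority info and above to a local file:
mail.info                       /var/log/mail.log
# Forward all messages to the central syslog server over UDP (single @):
*.*                             @192.168.1.100:514
# Forward authentication messages over TCP (double @@):
auth.*                          @@192.168.1.100:514

# Server side: accept syslog messages from the network on UDP and TCP port 514
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514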

Read more…

Categories: System

Force iptables to log messages to a different log file

16/10/2023

Source: nixCraft

According to the man page:
Iptables is used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel. Several different tables may be defined. Each table contains a number of built-in chains and may also contain user defined chains.

By default, iptables logs messages to the /var/log/messages file. However, you can change this location. I will show you how to create a new log file called /var/log/iptables.log. Using a dedicated file allows you to produce better statistics and analyze attacks more easily.

Iptables default log file

For example, if you type the following command, it will display the current iptables log entries from the /var/log/messages file:

# tail -f /var/log/messages

Output:

Oct  4 00:44:28 debian gconfd (vivek-4435): Resolved address "xml:readonly:/etc/gconf/gconf.xml.defaults" to a read-only configuration source at position 2
Oct  4 01:14:19 debian kernel: IN=ra0 OUT= MAC=00:17:9a:0a:f6:44:00:08:5c:00:00:01:08:00 SRC=200.142.84.36 DST=192.168.1.2 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=18374 DF PROTO=TCP SPT=46040 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
Oct  4 00:13:55 debian kernel: IN=ra0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:18:de:55:0a:56:08:00 SRC=192.168.1.30 DST=192.168.1.255 LEN=78 TOS=0x00 PREC=0x00 TTL=128 ID=13461 PROTO=UDP SPT=137 DPT=137 LEN=58

Procedure to log the iptables messages to a different log file

Open your /etc/syslog.conf file:

# vi /etc/syslog.conf

Append the following line:

kern.warning /var/log/iptables.log

Save and close the file.

Restart the syslogd (Debian / Ubuntu Linux):

# /etc/init.d/sysklogd restart

Use the following command to restart syslogd under Red Hat / CentOS / Fedora Core Linux:

# /etc/init.d/syslog restart

Now make sure you pass the --log-level 4 option, along with --log-prefix, to iptables. For example:

# DROP everything and Log it
iptables -A INPUT -j LOG --log-level 4
iptables -A INPUT -j DROP

For example, drop and log all connections from IP address 64.55.11.2 to your /var/log/iptables.log file:

iptables -A INPUT -s 64.55.11.2 -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix '** HACKERS **' --log-level 4
iptables -A INPUT -s 64.55.11.2 -j DROP

Where,

  • --log-level 4: the level of logging; level 4 corresponds to warning.
  • --log-prefix '*** TEXT ***': prefix log messages with the specified text (up to 29 letters long); useful for distinguishing iptables messages in the logs.

You can now see all iptables messages logged to the /var/log/iptables.log file:

# tail -f /var/log/iptables.log
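Because of the prefix added with --log-prefix, you can also filter just those entries out of the file, for example:

# grep -F '** HACKERS **' /var/log/iptables.log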