Archive

Articles tagged ‘munin’

System monitoring of machines on a network with Munin

30/10/2018

I. Introduction

Munin is a monitoring tool based on the well-known RRDTool, which lets you track all the system data of the other computers on the network. It automatically presents this data as graphs viewable from a web page. It also offers a plugin system that makes it easy to use and highly modular.

I chose to present it, rather than some of its competitors such as Nagios, Cacti or Zabbix, because it struck me as the easiest to use while retaining strong customization options.

A Munin setup consists of:

  • a main server, which collects the data
  • one node per piece of equipment to be monitored

It is worth noting that this architecture sets Munin apart from Nagios: the latter prefers to centralize all measurements on the server, which avoids installing anything on the monitored equipment.
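
On the main server, the nodes to poll are declared in the master configuration file. As a minimal sketch of what that host list can look like in /etc/munin/munin.conf (the hostname and address below are hypothetical):

# /etc/munin/munin.conf on the main server
[web1.example.com]
    address 192.168.1.10    # IP of the munin-node to poll
    use_node_name yes       # trust the hostname reported by the node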

Read more…

Monitoring fail2ban

25/10/2018

Source: arscenic.org

We installed fail2ban on each of our servers by following this tutorial.

The monitoring script we are going to use is available here: http://blog.sinnwidrig.org/?p=50. It is a generic script that creates a separate graph for each fail2ban action.

We decide to place the extra Munin plugins in the directory /opt/share/munin/plugins:

# Create the plugin directory in case it does not already exist
sudo mkdir -p /opt/share/munin/plugins
cd /opt/share/munin/plugins
# Fetch the script
sudo wget http://blog.sinnwidrig.org/files/fail2ban_-0.1
# Make the script executable
sudo chmod +x fail2ban_-0.1
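
Once downloaded, the plugin still has to be linked into munin-node's active plugin directory and the daemon restarted before it is polled. A sketch of those usual steps (fail2ban_ssh is a hypothetical link name, assuming the script follows Munin's wildcard-plugin naming convention):

# Link the plugin into munin-node's active plugin directory
sudo ln -s /opt/share/munin/plugins/fail2ban_-0.1 /etc/munin/plugins/fail2ban_ssh
# Restart munin-node so it picks up the new plugin
sudo service munin-node restart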

Read more…

Server monitoring with Munin

24/10/2018

Installation

Munin is a monitoring tool that works through scripts, collecting information about the system at regular intervals (every 5 minutes by default). Munin displays graphs of the data it collects from /proc. The graphs are served through Apache, so access to it must also be secured. Three different things need to be installed (see the install command after the list):

  • the munin program,
  • the munin-node daemon, which gathers the system information,
  • and apache2, which will let us view the graphs.
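
On a Debian/Ubuntu-type system, all three would typically be pulled in with a single command (assuming the stock distribution packages):

# Install the Munin master, the node daemon, and Apache in one go
sudo apt-get install munin munin-node apache2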

Read more…

Basic munin plugins for Snort

22/10/2018

Here are some basic Munin plugins for Snort using perfmon (enable the perfmonitor preprocessor in snort.conf).
The snort.conf entry should look something like:

preprocessor perfmonitor: time 300 file /your/path/to/snort.stats pktcnt 5000

(Read the snort docs for more info on performance issues etc.)

Drop Rate:
http://download.gamelinux.org/snort/snort_drop_rate

Pattern Matching:
http://download.gamelinux.org/snort/snort_pattern_match

Traffic speed:
http://download.gamelinux.org/snort/snort_traffic

Alerts:
http://download.gamelinux.org/snort/snort_alerts

Avg KBytes/pkt:
http://download.gamelinux.org/snort/snort_bytes_pkt

Avg Pkts/sec:
http://download.gamelinux.org/snort/snort_pkts

Edit any one of them to graph whatever you want from the perfmon output. It should be easy!
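
For reference, a Munin plugin is simply an executable that prints graph metadata when invoked with the config argument and field values otherwise, so editing these scripts mostly means changing those two parts. A minimal sketch of the skeleton (the stats-file path reuses the placeholder from the snort.conf example above, and the CSV field number is hypothetical, since it depends on the perfmon output of your Snort version):

#!/bin/sh
# Minimal Munin plugin skeleton for one perfmon metric
case $1 in
config)
    # Printed once so munin can set up the graph
    echo 'graph_title Snort drop rate'
    echo 'graph_vlabel percent'
    echo 'drop.label dropped packets'
    exit 0
    ;;
esac
# Normal run: print the current value of the field
# (perfmon appends comma-separated lines to the stats file)
printf 'drop.value '
tail -n 1 /your/path/to/snort.stats | cut -d, -f2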

And now I will test them myself!

Update:
Here is a picture to give you an idea of how the graphs look:
http://download.gamelinux.org/snort/Snort-Munin-Plugins.png

Source: GAMELINUX (http://www.gamelinux.org/?p=32)

What are good open-source log monitoring tools on Linux?

10/03/2016


In an operating system, logs are all about keeping track of events, be it critical system errors, resource usage warnings, transaction history, application status, or user activities. These logs, which are stored as (text or binary) files on the system, are useful for system auditing, debugging and maintenance. However, with so many different system entities generating log files, and at an ever-growing rate, the challenge for a system admin is how to "consume" these log files effectively.

That's when log monitoring tools come into the picture: they streamline the often laborious process of collecting, parsing and analyzing log files, and alert system admins to any interesting events. These tools are designed from the ground up for log monitoring, so they offer many attractive features, such as scalable log aggregation and filtering, human-readable display, event correlation, visual or email notification, flexible log retention policies, and so on.

In this post, I am going to introduce a list of popular open-source log monitoring software for Linux, ranging from simple log file viewers to full-blown log monitoring frameworks.

Log Aggregation and Filtering

Log monitoring would not be possible without efficient and scalable mechanisms to collect and pre-process log files. Tools in this category focus on shipping, collecting, filtering, indexing and storing log files, so that they can be further analyzed and visualized in subsequent monitoring pipelines.

1. rsyslog: an open-source log collector server which can filter and consolidate log data (based on the syslog protocol) from different hosts and devices on the network. rsyslog can be configured as a server or a client, where the former plays the role of log collector and the latter runs as a log sender; a minimal client/server sketch follows this list.

2. syslog-ng: another open-source implementation of the syslog protocol with more advanced and user-friendly features such as content-based filtering, easier-to-understand config format, and real-time event correlation.

3. systemd journal: the systemd journal can be configured for remote logging, where locally logged events are forwarded to a remote server over HTTP. In this setup, systemd-journal-upload on a client host serializes and forwards journal messages to systemd-journal-remote running on a remote collector server.

4. logstash: an open-source tool that collects, parses, and stores log files for offline search and analysis. logstash can run in various pipelines thanks to its many plugins supporting different input/output interfaces, codecs, and filtering rules. Input plugins allow logstash to gather log files from different sources and protocols (e.g., files, S3, RabbitMQ, syslog, collectd, TCP/UDP sockets). Filter/codec plugins are used to parse, convert, modify and add metadata to log files. Output plugins pass processed log files on to various target storages (e.g., file, Google Cloud Storage, Nagios, S3, Zabbix).

5. collectd: a daemon service which gathers various system-level statistics and stores them for historical analysis or real-time graphing. Similar to logstash, collectd has an extensible architecture, where you can enable various input/output plugins to change its collection behavior. For log collection, collectd can leverage the LogFile and Network plugins to aggregate remote log files.

6. Logster: an open-source utility for parsing log files for any interesting data, and aggregating extracted data into metrics for subsequent reporting and graphing pipelines.

7. Fluentd: a unified log aggregation layer which allows in-stream processing for a variety of streaming data and log files. It comes with a huge plugin ecosystem with more than 300 plugins to support various input sources and output interfaces.

8. Nxlog: a unified log collector and forwarder which supports a variety of log sources, formats and protocols. Advanced features include multi-threaded log collection and processing, message buffering and prioritization, built-in log rotation, and TLS/SSL transport.

9. Scribe: a scalable log collector server developed by Facebook. Scribe can aggregate log data streamed in real time from a large number of clients. It uses Apache Thrift for protocol encoding, so its interface is compatible with pretty much any language. While a proven solution, Scribe is not something you can deploy quickly as a turnkey solution. Also, note that Scribe is no longer updated or maintained.

10. Flume: a highly scalable and reliable service for transporting and collecting large volumes of streaming log data from any client, and storing it in backend storage such as Apache Hadoop's HDFS.
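
To make the rsyslog entry above concrete, here is a minimal sketch of the client/server pairing. The hostname and port are hypothetical; the server side uses rsyslog's RainerScript syntax and the client side the classic selector syntax:

# On the collector server (/etc/rsyslog.conf): accept syslog over TCP
module(load="imtcp")
input(type="imtcp" port="514")

# On each client: forward everything to the collector
# (@@ means TCP; a single @ would send over UDP)
*.* @@logserver.example.com:514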

Read more…