Archive

Articles tagged ‘monitoring’

How to analyze and view Apache web server logs interactively on Linux

25/10/2023 Comments off

Whether you are in the web hosting business or run a few web sites yourself on a VPS, chances are you want to display visitor statistics such as top visitors, requested files (dynamic or static), bandwidth used, client browsers, referring sites, and so forth.

GoAccess is a command-line log analyzer and interactive viewer for Apache and Nginx web servers. With this tool, you can not only browse the data mentioned above, but also parse the web server logs to dig for further data – all of it within a terminal window, in real time. Since most web servers today run either a Debian derivative or a Red Hat based distribution as the underlying operating system, I will show you how to install and use GoAccess on Debian and CentOS.

Installing GoAccess on Linux

In Debian, Ubuntu and derivatives, run the following command to install GoAccess:

# aptitude install goaccess

In CentOS, you’ll need to enable the EPEL repository and then:

# yum install goaccess

In Fedora, simply use the yum command:

# yum install goaccess

If you want to install GoAccess from source to enable further options (such as GeoIP location), first install the required dependencies for your operating system, and then follow these steps:

# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install
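
For example, on a Debian-based system the build dependencies can usually be pulled in with something like the following (the exact package names are an assumption and may differ between releases):

# aptitude install build-essential libncursesw5-dev libgeoip-dev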

That will install version 0.8.5, but you can always check the Downloads page of the project’s web site for the latest version.

Since GoAccess requires no further configuration, once it’s installed you are ready to go.

Running GoAccess

To start using GoAccess, just run it against your Apache access log.

For Debian and derivatives:

# goaccess -f /var/log/apache2/access.log

For Red Hat based distros:

# goaccess -f /var/log/httpd/access_log

When you first launch GoAccess, you will be presented with the following screen to choose the date and log format. You can toggle between options using the spacebar and proceed with F10. If you need to refresh your memory on date and log formats, refer to the Apache documentation.

In this case, choose Common Log Format (CLF):

[Screenshot: GoAccess date and log format selection dialog]

and then press F10. You will be presented with the statistics screen. For the sake of brevity, only the header, which summarizes the log file, is shown in the next image:

[Screenshot: GoAccess statistics screen header summarizing the log file]
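
GoAccess is not limited to the interactive curses view. As a hedged sketch (the exact flags vary between versions, so check goaccess --help on your build), redirecting its output to a file produces a static HTML report instead:

# goaccess -f /var/log/apache2/access.log -a > /var/www/report.html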

Read more…

How to measure packets per second or throughput on a high-speed network interface

24/10/2023 Comments off

There are many traffic monitoring tools available on Linux which can monitor and classify network traffic, and report real-time traffic statistics in fancy user interfaces. Most of these tools (e.g., ntopng, iftop) are powered by libpcap, a packet capture library used to monitor network traffic in user space. Despite their versatility, however, libpcap-based network monitoring tools cannot scale to handle traffic on multi-Gigabit network interfaces, due to the overhead associated with user-space packet capture.

In this tutorial, I will present simple shell scripts that can monitor network traffic on a per-interface basis, without relying on the slow libpcap library. These scripts are fast enough to support multi-Gigabit rates, but are only suitable if you are interested in “aggregate” network statistics per interface.

The secret of the scripts lies in the sysfs virtual filesystem, which the kernel uses to export device- or driver-related information to user space. Network interface statistics are exported via /sys/class/net/<ethX>/statistics.

For example, the statistics on eth0 interface are found in these files:

  • /sys/class/net/eth0/statistics/rx_packets: number of packets received
  • /sys/class/net/eth0/statistics/tx_packets: number of packets transmitted
  • /sys/class/net/eth0/statistics/rx_bytes: number of bytes received
  • /sys/class/net/eth0/statistics/tx_bytes: number of bytes transmitted
  • /sys/class/net/eth0/statistics/rx_dropped: number of packets dropped while receiving
  • /sys/class/net/eth0/statistics/tx_dropped: number of packets dropped while transmitting

The numbers stored in these files are refreshed automatically by the kernel in real time. Therefore, you can write scripts that calculate traffic statistics based on them.
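
You can check this yourself from the shell by sampling a counter twice, one second apart; the difference is the per-second rate (a minimal sketch, assuming an eth0 interface):

# R1=$(cat /sys/class/net/eth0/statistics/rx_bytes); sleep 1
# R2=$(cat /sys/class/net/eth0/statistics/rx_bytes)
# echo "$(( (R2 - R1) / 1024 )) kB/s received"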

The following are two such scripts (thanks to joemiller). The first script counts the number of packets per second received (RX) and sent (TX) on an interface, while the second measures the network bandwidth of incoming (RX) and outgoing (TX) traffic on an interface. You do not need to install anything for these scripts to work.

Measure Packets per Second on an Interface

#!/bin/bash
# Count packets per second, received (RX) and transmitted (TX), on an interface.

INTERVAL="1"  # update interval in seconds

if [ -z "$1" ]; then
        echo
        echo "usage: $0 [network-interface]"
        echo
        echo "e.g. $0 eth0"
        echo
        echo "shows packets-per-second"
        exit 1
fi

IF=$1

while true
do
        # Sample the kernel's packet counters, wait one interval, sample again.
        R1=$(cat "/sys/class/net/$IF/statistics/rx_packets")
        T1=$(cat "/sys/class/net/$IF/statistics/tx_packets")
        sleep "$INTERVAL"
        R2=$(cat "/sys/class/net/$IF/statistics/rx_packets")
        T2=$(cat "/sys/class/net/$IF/statistics/tx_packets")
        # The counter difference over the interval is the per-second rate.
        TXPPS=$(( T2 - T1 ))
        RXPPS=$(( R2 - R1 ))
        echo "TX $IF: $TXPPS pkts/s RX $IF: $RXPPS pkts/s"
done
 

Measure Network Bandwidth on an Interface

#!/bin/bash
# Measure incoming (RX) and outgoing (TX) bandwidth, in kB/s, on an interface.

INTERVAL="1"  # update interval in seconds

if [ -z "$1" ]; then
        echo
        echo "usage: $0 [network-interface]"
        echo
        echo "e.g. $0 eth0"
        echo
        exit 1
fi

IF=$1

while true
do
        # Sample the kernel's byte counters, wait one interval, sample again.
        R1=$(cat "/sys/class/net/$IF/statistics/rx_bytes")
        T1=$(cat "/sys/class/net/$IF/statistics/tx_bytes")
        sleep "$INTERVAL"
        R2=$(cat "/sys/class/net/$IF/statistics/rx_bytes")
        T2=$(cat "/sys/class/net/$IF/statistics/tx_bytes")
        # Bytes per interval, converted to kB/s.
        TKBPS=$(( (T2 - T1) / 1024 ))
        RKBPS=$(( (R2 - R1) / 1024 ))
        echo "TX $IF: $TKBPS kB/s RX $IF: $RKBPS kB/s"
done
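
To try them out, save each script to a file, make it executable, and pass the interface to watch (the file names below are just placeholders):

$ chmod +x pps.sh bandwidth.sh
$ ./pps.sh eth0
$ ./bandwidth.sh eth0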

The following screenshot shows the two scripts in action.

[Screenshot: the packets-per-second and bandwidth scripts running side by side]

Source: Xmodulo 

What are good open-source log monitoring tools on Linux?

20/10/2023 Comments off


In an operating system, logs are all about keeping track of events, be they critical system errors, resource usage warnings, transaction history, application status, or user activities. These logs, stored as (text or binary) files in the system, are useful for system auditing, debugging and maintenance. However, with so many different system entities generating log files at an ever-growing rate, the challenge for a system admin is how to “consume” these log files effectively.

That’s where log monitoring tools come into the picture: they streamline the often laborious process of collecting, parsing and analyzing log files, and alert system admins to any interesting events. These tools are designed from the ground up for log monitoring, so they offer many attractive features, such as scalable log aggregation and filtering, human-readable display, event correlation, visual or email notification, flexible log retention policy, and so on.

In this post, I am going to introduce a list of popular open-source log monitoring software for Linux, ranging from simple log file viewers to full-blown log monitoring frameworks.

Log Aggregation and Filtering

Log monitoring would not be possible without efficient and scalable mechanisms to collect and pre-process log files. Tools in this category focus on shipping, collecting, filtering, indexing and storing log files, so that they can be further analyzed and visualized in subsequent monitoring pipelines.

1. rsyslog: an open-source log collector server which can filter and consolidate log data (based on syslog protocol) from different hosts and devices in the network. rsyslog can be configured as a server or a client, where the former plays the role of a log collector and the latter runs as a log sender.
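
As a minimal sketch (legacy rsyslog.conf syntax; newer releases also accept an equivalent module()/input() style, and the server address here is just an example), the collector listens on UDP port 514 while each client forwards everything to it:

# --- on the collector server, in /etc/rsyslog.conf ---
$ModLoad imudp
$UDPServerRun 514

# --- on each client, in /etc/rsyslog.conf ---
*.* @192.168.1.10:514    # single @ = UDP; use @@ for TCP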

2. syslog-ng: another open-source implementation of the syslog protocol with more advanced and user-friendly features such as content-based filtering, easier-to-understand config format, and real-time event correlation.

3. systemd journal: systemd journal can be configured for remote journal logging, where locally logged events are forwarded to a remote server over HTTP. In this setup, systemd-journal-upload on a client host serializes and forwards journal messages to systemd-journal-remote running on a remote collector server.
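
A hedged sketch of that setup (19532 is systemd-journal-remote’s default port; the host name is just an example):

# --- on the collector server ---
# systemctl enable --now systemd-journal-remote.socket

# --- on each client, in /etc/systemd/journal-upload.conf ---
[Upload]
URL=http://logserver.example.com:19532

# --- then start the uploader on the client ---
# systemctl enable --now systemd-journal-upload.service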

4. logstash: an open-source tool that collects, parses, and stores log files for offline search and analysis. logstash can run in various pipelines thanks to its many plugins supporting different input/output interfaces, decoding/encoding, and filtering rules. Input plugins allow logstash to gather log files from different sources and protocols (e.g., files, S3, RabbitMQ, syslog, collectd, TCP/UDP sockets). Filter/codec plugins are used to parse, convert, modify and add metadata to log files. Output plugins pass processed log files to various target storages (e.g., file, Google cloud storage, Nagios, S3, Zabbix).
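
A minimal pipeline sketch in logstash’s own configuration format (the input path and grok pattern are assumptions for an Apache access log):

input  { file { path => "/var/log/apache2/access.log" } }
filter { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } } }
output { stdout { codec => rubydebug } }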

5. collectd: a daemon service which gathers various system-level statistics, and stores them for historical analysis or real-time graphing. Similar to logstash, collectd has an extensible architecture, where you can enable various input/output plugins to change its collection behavior. For log collection, collectd can leverage the LogFile and Network plugins to aggregate remote log files.

6. Logster: an open-source utility for parsing log files for any interesting data, and aggregating extracted data into metrics for subsequent reporting and graphing pipelines.

7. Fluentd: a unified log aggregation layer which allows in-stream processing for a variety of streaming data and log files. It comes with a huge plugin ecosystem with more than 300 plugins to support various input sources and output interfaces.

8. Nxlog: a unified log collector and forwarder which supports a variety of log sources, formats and protocols. Advanced features include multi-threaded log collection and processing, message buffering and prioritization, built-in log rotation, and TLS/SSL transport.

9. Scribe: a scalable log collector server developed by Facebook. Scribe can aggregate log data streamed in real time from a large number of clients. It uses Apache Thrift for protocol encoding, so its interface is compatible with pretty much any language. While a proven solution, Scribe is not something you can deploy quickly as a turnkey solution. Also note that Scribe is no longer updated or maintained.

10. Flume: a highly scalable and reliable service to transport and collect large volumes of streaming log data from any clients, and store them in backend storage such as Apache Hadoop’s HDFS.

Read more…

Monitor TCP Traffic on specific port

22/09/2023 1 comment

Source: superuser.com

I’ve searched quite extensively for this, but cannot seem to come up with a working example.

My objective is to monitor TCP traffic on a specific port to see incoming connections and write them to a text file. The catch is that I also need a timestamp on each row to show exactly when the client connected, down to the second.

I’ve already exhausted netstat, nmap, and tcptrack, but none of them supports timestamps.

I was thinking a Linux shell script might work if I monitored a specific local port, wrote text to a file when a connection was made, and then just concatenated the date onto each line.

I was playing with this:

netstat -ano|grep 443|grep ESTABLISHED

as well as this:

tcptrack -i eth0 port 443

but neither suits my needs, as I need the time the connection comes in.
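
One hedged approach is tcpdump, whose -tttt option prefixes every line with a full date-and-time stamp; filtering on SYN-without-ACK packets logs only incoming connection attempts (the interface and output file are just examples):

# tcpdump -i eth0 -nn -tttt 'tcp dst port 443 and tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn' >> connections.log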

Categories: Network Tags:

Nmon – A nifty little tool to monitor system resources on Linux

19/09/2023 Comments off

Source: binarytides.com

Nmon

Nmon (Nigel’s performance Monitor for Linux) is another very useful command-line utility that can display information about various system resources like CPU, memory, disk, network, etc. It was developed at IBM and later released as open source.

It is available for most common architectures like x86 and ARM, and platforms like Linux and Unix. It is interactive, and the output is well organized, similar to htop.

Using Nmon, it is possible to view the performance of different system resources on a single screen.
The man page describes nmon as:

nmon is a systems administrator, tuner, benchmark tool. It can display the CPU, memory, network, disks (mini graphs or numbers), file systems, NFS, top processes, resources (Linux version & processors) and, on Power, micro-partition information.

Project website
http://nmon.sourceforge.net/

Install Nmon

Debian/Ubuntu type distros have nmon in their default repos, so grab it with apt:

$ sudo apt-get install nmon

Fedora users can get it with yum:

$ sudo yum install nmon

CentOS users need to install nmon from the rpmforge/repoforge repository; it is not present in EPEL.
Either download the correct rpm installer from

http://pkgs.repoforge.org/nmon/

Or set up the rpmforge repository by following the instructions here:
http://wiki.centos.org/AdditionalResources/Repositories/RPMForge

And then install using yum

$ sudo yum install nmon
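
Once installed, run nmon interactively and toggle panels with single keys, or capture data to a file for later analysis (options per the nmon man page):

$ nmon                  # interactive mode: press c (CPU), m (memory), n (network), d (disks), q to quit
$ nmon -f -s 10 -c 360  # capture mode: one snapshot every 10 seconds, 360 snapshots, written to a .nmon file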

Read more…

Categories: System Tags: