Archive

Articles tagged ‘administration’

Track Multiple Files Simultaneously With MultiTail

07/06/2016 Comments off


The tail utility is one of the most useful admin tools, but it’s limited and outdated. Bring your log-watching capabilities into the 21st century and view multiple logs at one time with MultiTail.

What’s MultiTail? It’s an ncurses utility that can display multiple files using « windows » (much like GNU Screen or Tmux) in a terminal or at the console. It also supports color highlighting, filtering and much more.

To get MultiTail, head over to the download page or see if your operating system already has packages. On Debian-based systems, you should only need to look for the multitail package. The project released an update (5.2.8) on April 14, so the most recent release probably won’t be in your upstream package repo just yet.

Once it’s installed, run multitail and hit F1. It will pop up a help menu with all of its keybindings. You’ll need to scroll down (use the down arrow key) to see all of the commands; otherwise, it can look as though there are only a few. Let’s look at a couple of the commands you’ll want to start with.

First, run t — this displays the stats for your instance of MultiTail. To add files, use the a command.

If you want to start MultiTail with a file, or with the output of a command, use multitail filename, or multitail filename1 filename2 for more than one file. Use multitail -R 3 -l "command" -R 3 -l "command2" to see two commands displayed in one window.
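
For instance, here is a sketch of a slightly richer invocation (the log paths are illustrative; -s splits the display into columns and -I merges a file into the previous window):

$ multitail -s 2 /var/log/syslog /var/log/auth.log -I /var/log/kern.log

This watches two logs side by side in two columns, with kernel messages merged into the second window.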

Using the h command within MultiTail, you can set the height of each window. If you want to search through a window, use / or shift / to highlight the search string. Using I will toggle case sensitivity.

When you’ve used the search feature, you’ll get a buffer that displays in a « window » above the file. You can write this to a file using s, which will bring up a dialog that prompts for a filename to which to save.

If you get a MultiTail session configured just right you can save it for later using w from within the session. It will prompt you for a filename to which to save the script.

Basically, you can do just about anything you’d be able to do with tail and then some. It’s very interactive, and many of its commands have dialogs that walk you through creating regular expressions, re-arranging windows, and more.

The only caveat I have is that, occasionally, MultiTail is a bit crashy. It’s not wholly unreliable, but I have managed to crash MultiTail a few times while putting it through its paces. (I’ve never managed to crash tail…) But it’s still an invaluable tool for any Linux or UNIX admin to have around.

Joe ‘Zonker’ Brockmeier is a freelance writer and editor with more than 10 years’ experience covering IT. Formerly the openSUSE Community Manager for Novell, Brockmeier has written for Linux Magazine, Sys Admin, Linux Pro Magazine, IBM developerWorks, Linux.com, CIO.com, Linux Weekly News, ZDNet, and many other publications. You can reach Zonker at jzb@zonker.net and follow him on Twitter.

Source: ServerWatch

How to set up web-based network traffic monitoring system on Linux

25/05/2016 Comments off

When you are tasked with monitoring network traffic on the local network, you can consider many different options, depending on the scale and traffic of the network, the monitoring platform/interface, the type of backend database, and so on.

ntopng is an open-source (GPLv3) network traffic analyzer which provides a web interface for real-time network traffic monitoring. It runs on multiple platforms, including Linux and Mac OS X. ntopng comes with a simple RMON-like agent with built-in web server capability, and uses the Redis key-value server to store time series statistics. You can install ntopng on any designated monitoring server connected to your network, and use a web browser to access the real-time traffic reports available on the server.

In this tutorial, I will describe how to set up a web-based network traffic monitoring system on Linux by using ntopng.

Features of ntopng

  • Flow-level, protocol-level real-time analysis of local network traffic.
  • Domain, AS (Autonomous System), VLAN level statistics.
  • Geolocation of IP addresses.
  • Deep packet inspection (DPI) based service discovery (e.g., Google, Facebook).
  • Historical traffic analysis (e.g., hourly, daily, weekly, monthly, yearly).
  • Support for sFlow, NetFlow (v5/v9) and IPFIX through nProbe.
  • Network traffic matrix (who’s talking to whom?).
  • IPv6 support.

Install ntopng on Linux

The official website offers binary packages for Ubuntu and CentOS. So if you use either platform, you can install these packages.

If you want to build the latest ntopng from its source, follow the instructions below. (Update: these instructions are valid for ntopng 1.0. For ntopng 1.1 and higher, see the updated instructions).

To build ntopng on Debian, Ubuntu or Linux Mint:

$ sudo apt-get install libpcap-dev libglib2.0-dev libgeoip-dev redis-server wget libxml2-dev
$ # download the ntopng-1.0 source tarball from the project's download page first
$ tar xzf ntopng-1.0.tar.gz -C ~
$ cd ~/ntopng-1.0/
$ ./configure
$ make geoip
$ make

In the above steps, « make geoip » will automatically download a free version of the GeoIP databases from maxmind.com using wget, so make sure that your system is connected to the network.

To build ntopng on Fedora:

$ sudo yum install libpcap-devel glib2-devel GeoIP-devel libxml2-devel redis wget
$ # download the ntopng-1.0 source tarball from the project's download page first
$ tar xzf ntopng-1.0.tar.gz -C ~
$ cd ~/ntopng-1.0/
$ ./configure
$ make geoip
$ make

To install ntopng on CentOS or RHEL, first set up EPEL repository, and then follow the same instructions as in Fedora above.

Configure ntopng on Linux

After building ntopng, create a configuration directory for ntopng, and prepare default configuration files as follows. I assume that « 192.168.1.0/24 » is the CIDR address prefix of your local network.

$ sudo mkdir /etc/ntopng -p
$ sudo -e /etc/ntopng/ntopng.start
--local-networks "192.168.1.0/24"
--interface 1
$ sudo -e /etc/ntopng/ntopng.conf
-G=/var/run/ntopng.pid
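
With those files in place, you can point ntopng at the configuration file to start it. This is a minimal sketch assuming a from-source build (the binary typically lands in /usr/local/bin):

$ sudo /usr/local/bin/ntopng /etc/ntopng/ntopng.conf

By default, ntopng’s embedded web server listens on port 3000, so you can then browse to http://<server-ip>:3000 to view the traffic reports.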

Read more…

How to monitor a Linux server and desktop remotely from web browser

24/05/2016 Comments off

Monitoring a Linux server

When it comes to monitoring a Linux server, there are more than enough options to choose from. While there are many production-quality monitoring solutions (e.g., Nagios, Zabbix, Zenoss) boasting fancy UIs, monitoring scalability, comprehensive reporting capabilities, and so on, these solutions are probably overkill for most end users. If all you need is to check the basic status (e.g., CPU load, memory usage, active processes, disk usage) of a remote Linux server or desktop, consider linux-dash.

linux-dash is a lightweight web-based monitoring dashboard for Linux machines which can display, in real time, various system properties such as CPU load, RAM usage, disk usage, Internet speed, network connections, RX/TX bandwidth, logged-in users, and running processes. linux-dash does not come with a backend database for storing long-term statistics. Simply drop the linux-dash app into an existing web server (e.g., Apache, Nginx), and you are good to go. It is a quick and easy way to set up remote monitoring for personal projects.

In this tutorial, I am going to describe how to set up linux-dash on the Nginx web server on Linux. Nginx is preferred over the Apache web server due to its lightweight engine.

Set up linux-dash on Debian, Ubuntu or Linux Mint

First, install the Nginx web server along with php-fpm.

$ sudo apt-get install git nginx php5-json php5-fpm php5-curl

Configure Nginx for the linux-dash app by creating /etc/nginx/conf.d/linuxdash.conf as follows. In this example, we are going to use port 8080.

$ sudo vi /etc/nginx/conf.d/linuxdash.conf
server {
    server_name $domain_name;
    listen 8080;
    root /var/www;
    index index.html index.php;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location ~* \.(?:xml|ogg|mp3|mp4|ogv|svg|svgz|eot|otf|woff|ttf|css|js|jpg|jpeg|gif|png|ico)$ {
        try_files $uri =404;
        expires max;
        access_log off;
        add_header Pragma public;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }

    location /linux-dash {
        index index.html index.php;
    }

    # PHP-FPM via sockets
    location ~ \.php(/|$) {
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        try_files $uri $uri/ /index.php?$args;
        include fastcgi_params;
    }
}

Disable the default site configuration.

$ sudo rm /etc/nginx/sites-enabled/default

Configure php-fpm by editing /etc/php5/fpm/pool.d/www.conf. Make sure to edit the « user », « group » and « listen » directives as shown below. You can keep the rest of the configuration unchanged.

$ sudo vi /etc/php5/fpm/pool.d/www.conf

. . .
user = www-data
group = www-data
listen = /var/run/php5-fpm.sock
. . .

Proceed to download and install linux-dash.

$ git clone https://github.com/afaqurk/linux-dash.git
$ sudo cp -r linux-dash/ /var/www/
$ sudo chown -R www-data:www-data /var/www

Restart Nginx web server as well as php5-fpm to finalize installation.

$ sudo service php5-fpm restart
$ sudo service nginx restart
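
At this point you can do a quick sanity check (localhost and port 8080 match the configuration above):

$ curl -I http://localhost:8080/linux-dash/

An HTTP 200 response means Nginx and php5-fpm are serving the app; point a browser at http://<server-ip>:8080/linux-dash/ to see the dashboard.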

Read more…

How to monitor a log file on Linux with logwatch

06/04/2016 Comments off

The Linux operating system and many applications create special files, commonly referred to as « logs », to record operational events. These system logs or application-specific log files are an essential tool when it comes to understanding and troubleshooting the behavior of the operating system and third-party applications. However, log files are not precisely what you would call « light » or « easy » reading, and analyzing raw log files by hand is often time-consuming and tedious. For that reason, any utility that can convert raw log files into a more user-friendly log digest is a great boon for sysadmins.

logwatch is an open-source log parser and analyzer written in Perl, which can parse and convert raw log files into a structured format, producing a customizable report based on your use cases and requirements. In logwatch, the focus is on producing a more easily consumable log summary, not on real-time log processing and monitoring. As such, logwatch is typically invoked as an automated cron task at a desired time and frequency, or manually from the command line whenever log processing is needed. Once a log report is generated, logwatch can email the report to you, save it to a file, or display it on the screen.

A logwatch report is fully customizable in terms of verbosity and processing coverage. The log processing engine of logwatch is extensible, in a sense that if you want to enable logwatch for a new application, you can write a log processing script (in Perl) for the application’s log file, and plug it under logwatch.

One downside of logwatch is that its report does not include the detailed timestamp information available in the original log files. You will only know that a particular event was logged within the requested range of time, and you will have to access the original log files to get exact timing information.

Installing Logwatch

On Debian and derivatives:

# aptitude install logwatch

On Red Hat-based distributions:

# yum install logwatch

Configuring Logwatch

During installation, the main configuration file (logwatch.conf) is placed in /etc/logwatch/conf. Configuration options defined in this file override system-wide settings defined in /usr/share/logwatch/default.conf/logwatch.conf.

If logwatch is launched from the command line without any arguments, the custom options defined in /etc/logwatch/conf/logwatch.conf will be used. However, if any command-line arguments are specified with logwatch command, those arguments in turn override any default/custom settings in /etc/logwatch/conf/logwatch.conf.

In this article, we will customize several default settings of logwatch by editing /etc/logwatch/conf/logwatch.conf file.

Detail = <Low, Med, High, or a number>

The « Detail » directive controls the verbosity of a logwatch report. It can be a positive integer, or High, Med or Low, which correspond to 10, 5, and 0, respectively.

MailTo = youremailaddress@yourdomain.com

The « MailTo » directive is used if you want to have a logwatch report emailed to you. To send a logwatch report to multiple recipients, specify their email addresses separated by spaces. To use this directive, however, you will need to configure a local mail transfer agent (MTA) such as sendmail or Postfix on the server where logwatch is running.

Range = <Yesterday|Today|All>

The « Range » directive specifies the time duration of a logwatch report. Common values for this directive are Yesterday, Today or All. When « Range = All » is used, the « Archive = yes » directive is also needed, so that all archived versions of a given log file (e.g., /var/log/maillog, /var/log/maillog.X, or /var/log/maillog.X.gz) are processed.

Besides such common range values, you can also use more complex range options such as the following.

  • Range = "2 hours ago for that hour"
  • Range = "-5 days"
  • Range = "between -7 days and -3 days"
  • Range = "since September 15, 2014"
  • Range = "first Friday in October"
  • Range = "2014/10/15 12:50:15 for that second"

To be able to use such free-form range examples, you need to install the Date::Manip Perl module from CPAN. Refer to this post for CPAN module installation instructions.

Service = <service-name-1>
Service = <service-name-2>
. . .

The « Service » option specifies one or more services to monitor using logwatch. All available services are listed in /usr/share/logwatch/scripts/services, which covers essential system services (e.g., pam, secure, iptables, syslogd) as well as popular application services such as sudo, sshd, http, fail2ban and samba. If you want to add a new service to the list, you will have to write a corresponding log processing Perl script and place it in this directory.

If this option is used to select specific services, you need to comment out the line « Service = All » in /usr/share/logwatch/default.conf/logwatch.conf.


Format = <text|html>

The « Format » directive specifies the format (e.g., text or HTML) of a logwatch report.

Output = <file|mail|stdout>

The « Output » directive indicates where a logwatch report should be sent. It can be saved to a file (file), emailed (mail), or shown on screen (stdout).
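
Recall that command-line arguments override the configuration file, so you can also drive logwatch for a one-off report. As a sketch, the following mirrors the directives above and prints a highly detailed report of today’s sshd activity to the screen:

# logwatch --detail High --service sshd --range today --output stdout --format text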

Read more…

Packet Analyzer: 15 TCPDUMP Command Examples

05/04/2016 Comments off

Packet Analyzer: efficient network traffic analysis

The tcpdump command is also called a packet analyzer.

The tcpdump command works on most flavors of the Unix operating system. tcpdump allows us to save captured packets so that we can use them for future analysis. The saved file can be viewed with the same tcpdump command. We can also use open-source software like Wireshark to read tcpdump pcap files.
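
For example, the save-and-replay workflow looks like this (capture.pcap is an arbitrary file name):

$ tcpdump -i eth0 -w capture.pcap   # save captured packets to a file
$ tcpdump -r capture.pcap           # read the saved capture back later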

In this tcpdump tutorial, let us discuss some practical examples of how to use the tcpdump command.

1. Capture packets from a particular ethernet interface using tcpdump -i

When you execute the tcpdump command without any options, it captures all the packets flowing through all the interfaces. The -i option allows you to filter on a particular Ethernet interface.

$ tcpdump -i eth1
14:59:26.608728 IP xx.domain.netbcp.net.52497 > valh4.lell.net.ssh: . ack 540 win 16554
14:59:26.610602 IP resolver.lell.net.domain > valh4.lell.net.24151:  4278 1/0/0 (73)
14:59:26.611262 IP valh4.lell.net.38527 > resolver.lell.net.domain:  26364  PTR? 244.207.104.10.in-addr.arpa. (45)

In this example, tcpdump captured all the packets flowing through the interface eth1 and displayed them on standard output.

Note: The Editcap utility is used to select or remove specific packets from a dump file and translate them into a given format.

2. Capture only N number of packets using tcpdump -c

When you execute the tcpdump command, it keeps capturing packets until you cancel it. Using the -c option you can specify the number of packets to capture.

$ tcpdump -c 2 -i eth0
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:38:38.184913 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457255642:1457255758(116) ack 1561463966 win 63652
14:38:38.690919 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652
2 packets captured
13 packets received by filter
0 packets dropped by kernel

The above tcpdump command captured only 2 packets from interface eth0.

Note: Mergecap and TShark: Mergecap is a packet-dump combining tool which will combine multiple dumps into a single dump file. TShark is a powerful tool to capture network packets which can be used to analyze network traffic. It comes with the Wireshark network analyzer distribution.

3. Display Captured Packets in ASCII using tcpdump -A

The following tcpdump syntax prints the packet in ASCII.

$ tcpdump -A -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:34:50.913995 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457239478:1457239594(116) ack 1561461262 win 63652
E.....@.@..]..i...9...*.V...]...P....h....E...>{..U=...g.
......G..7 KA....A...L.
14:34:51.423640 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652
E.....@.@....i...9...*.V..*]...P....h....7......X..!....Im.S.g.u:*..O&....^#Ba...
E..(R.@.|.....9...i.*...]...V..*P..OWp........

Note: The ifconfig command is used to configure network interfaces.

Read more…

What are useful CLI tools for Linux system admins?

17/03/2016 Comments off

System administrators (sysadmins) are responsible for the day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of the trade. Utilizing the proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.


This article will present some of the most popular and useful CLI tools recommended for sysadmins in their day-to-day activities. If you would like to recommend a useful tool which is not listed here, don’t forget to share it in the comment section.

Network Tools

1. ping: Check end-to-end connectivity (RTT delay, jitter, packet loss) of a remote host with ICMP echo/reply. Useful to check system status and reachability.

2. hping: Network scanning and testing tool that can generate ICMP/TCP/UDP ping packets. Often used for advanced port scanning, firewall testing, manual path MTU discovery and fragmentation testing.

3. traceroute: Discover a layer-3 forwarding path from a local host to a remote destination host with TTL-limited ICMP/UDP/TCP probe packets. Useful to troubleshoot network reachability and routing problems.

4. mtr: A variation of traceroute which characterizes per-hop packet loss/jitter with running statistics. Useful to characterize routing path delays (see the sample invocation after this list).

5. netcat/socat: A Swiss Army knife of TCP/IP networking, allowing you to read/write byte streams over TCP/UDP. Useful to troubleshoot firewall policies and service availability.

6. dig: DNS troubleshooting tool that can generate forward queries, reverse queries, find authoritative name servers, and check CNAME, MX and other DNS records. Can be instructed to query a specific DNS server of your choosing (also demonstrated after this list).

7. nslookup: Another DNS checking/troubleshooting tool. Works with all DNS queries and records. Can query a particular DNS server.

8. dnsyo: A DNS testing tool which checks DNS propagation by performing DNS lookups from a number of open resolvers located across 1,500 different networks around the world.

9. lsof: Show information about files (e.g., regular files, pipes or sockets) which are opened by processes. Useful to monitor processes or users in terms of their open network connections or opened files.

10. iftop: An ncurses-based TUI utility that can be used to monitor, in real time, bandwidth utilization and network connections for individual network interfaces. Useful to keep track of bandwidth-hogging applications, users, destinations and ports.

11. netstat: A network statistics utility that can show status information and statistics about open network connections (TCP/UDP ports, IP addresses), routing tables, TX/RX traffic and protocols. Useful for network related diagnosis and performance tuning.

12. tcpdump: A popular packet sniffer tool based on the libpcap packet capture library. Can define packet capturing filters in Berkeley Packet Filter format.

13. tshark: Another CLI packet sniffer, fully compatible with its GUI counterpart, Wireshark. Supports 1,000 protocols and the list is growing. Useful to troubleshoot, analyze and store information on live packets.

14. ip: A versatile CLI networking tool which is part of the iproute2 package. Used to check and modify routing tables, network device state, and IP tunneling settings. Useful to view routing tables, add/remove static routes, configure network interfaces, and otherwise troubleshoot routing issues.

15. ifup/ifdown: Used to bring up or shut down a particular network interface. Often a preferred alternative to restarting the entire network service.

16. autossh: A program which creates an SSH session and automatically restarts the session should it disconnect. Often useful to create a persistent reverse SSH tunnel across restrictive corporate networks.

17. iperf: A network testing tool which measures maximum bi-directional throughput between a pair of hosts by injecting customizable TCP/UDP data streams in between.

18. elinks/lynx: Text-based web browsers for a CLI-based server environment.
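
To make a couple of these concrete, here are sample invocations of mtr (item 4) and dig (item 6); the domain and resolver below are placeholders:

$ mtr --report --report-cycles 100 example.com   # per-hop loss/latency summary over 100 probes
$ dig @8.8.8.8 example.com MX +short             # ask a specific resolver for MX records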

Read more…

How to analyze and view Apache web server logs interactively on Linux

17/03/2016 Comments off

Whether you are in the web hosting business, or run a few web sites on a VPS yourself, chances are you want to display visitor statistics such as top visitors, requested files (dynamic or static), used bandwidth, client browsers, and referring sites, and so forth.

GoAccess is a command-line log analyzer and interactive viewer for the Apache or Nginx web servers. With this tool, you will not only be able to browse the data mentioned earlier, but also parse the web server logs to dig for further data, all within a terminal window in real time. Since most web servers today run on either a Debian derivative or a Red Hat-based distribution, I will show you how to install and use GoAccess on Debian and CentOS.

Installing GoAccess on Linux

In Debian, Ubuntu and derivatives, run the following command to install GoAccess:

# aptitude install goaccess

In CentOS, you’ll need to enable the EPEL repository and then:

# yum install goaccess

In Fedora, simply use the yum command:

# yum install goaccess

If you want to install GoAccess from the source to enable further options (such as GeoIP location), install required dependencies for your operating system, and then follow these steps:

# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install

That will install version 0.8.5, but you can always check the latest version on the Downloads page of the project’s web site.

Since GoAccess does not require any further configuration, once it’s installed you are ready to go.

Running GoAccess

To start using GoAccess, just run it against your Apache access log.

For Debian and derivatives:

# goaccess -f /var/log/apache2/access.log

For Red Hat based distros:

# goaccess -f /var/log/httpd/access_log

When you first launch GoAccess, you will be presented with a screen where you choose the date and log format. As explained on that screen, you can toggle between options using the spacebar and proceed with F10. As for the date and log formats, you may want to refer to the Apache documentation if you need to refresh your memory.

In this case, choose Common Log Format (CLF), and then press F10.


You will be presented with the statistics screen, whose header shows a summary of the log file.

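Besides the interactive view, GoAccess can also produce a static HTML report. As a sketch (using the Debian log path from above), redirect its output to a file:

# goaccess -f /var/log/apache2/access.log > report.html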

Read more…

How to measure packets per second or throughput on a high-speed network interface

17/03/2016 Comments off

There are many traffic monitoring tools available on Linux which can monitor/classify network traffic and report real-time traffic statistics in fancy user interfaces. Most of these tools (e.g., ntopng, iftop) are powered by libpcap, a packet capture library used to monitor network traffic in user space. Despite their versatility, however, libpcap-based network monitoring tools cannot scale to handle traffic on multi-Gigabit-rate network interfaces, due to the overhead associated with user-space packet capture.

In this tutorial, I will present simple shell scripts that can monitor network traffic on a per-interface basis, without relying on the slow libpcap library. These scripts are fast enough to support multi-Gigabit rates, but are only suitable if you are interested in « aggregate » network statistics per interface.

The secret of the scripts lies in the sysfs virtual filesystem, which the kernel uses to export device- or driver-related information to user space. Network-interface statistics are exported via /sys/class/net/<ethX>/statistics.

For example, the statistics on eth0 interface are found in these files:

  • /sys/class/net/eth0/statistics/rx_packets: number of packets received
  • /sys/class/net/eth0/statistics/tx_packets: number of packets transmitted
  • /sys/class/net/eth0/statistics/rx_bytes: number of bytes received
  • /sys/class/net/eth0/statistics/tx_bytes: number of bytes transmitted
  • /sys/class/net/eth0/statistics/rx_dropped: number of packets dropped on receive
  • /sys/class/net/eth0/statistics/tx_dropped: number of packets dropped on transmit

The numbers stored in the files are automatically refreshed in real-time by the kernel. Therefore, you can write scripts that calculate traffic statistics based on these files.

The following are two such scripts (thanks to joemiller). The first script counts the number of packets per second, received (RX) or sent (TX), on an interface, while the second measures the network bandwidth of incoming (RX) and outgoing (TX) traffic on the interface. For these scripts to work, you do not need to install anything.

Measure Packets per Second on an Interface

#!/bin/bash
INTERVAL="1"  # update interval in seconds
if [ -z "$1" ]; then
        echo
        echo "usage: $0 [network-interface]"
        echo
        echo "e.g. $0 eth0"
        echo
        echo "shows packets-per-second"
        exit
fi

IF=$1

while true
do
        R1=`cat /sys/class/net/$IF/statistics/rx_packets`
        T1=`cat /sys/class/net/$IF/statistics/tx_packets`
        sleep $INTERVAL
        R2=`cat /sys/class/net/$IF/statistics/rx_packets`
        T2=`cat /sys/class/net/$IF/statistics/tx_packets`
        TXPPS=`expr $T2 - $T1`
        RXPPS=`expr $R2 - $R1`
        echo "TX $IF: $TXPPS pkts/s RX $IF: $RXPPS pkts/s"
done
 

Measure Network Bandwidth on an Interface

#!/bin/bash

INTERVAL="1" # update interval in seconds

if [ -z "$1" ]; then
 echo
 echo "usage: $0 [network-interface]"
 echo
 echo "e.g. $0 eth0"
 echo
 exit
fi

IF=$1

while true
do
 R1=`cat /sys/class/net/$IF/statistics/rx_bytes`
 T1=`cat /sys/class/net/$IF/statistics/tx_bytes`
 sleep $INTERVAL
 R2=`cat /sys/class/net/$IF/statistics/rx_bytes`
 T2=`cat /sys/class/net/$IF/statistics/tx_bytes`
 TBPS=`expr $T2 - $T1`
 RBPS=`expr $R2 - $R1`
 TKBPS=`expr $TBPS / 1024`
 RKBPS=`expr $RBPS / 1024`
 echo "TX $IF: $TKBPS kB/s RX $IF: $RKBPS kB/s"
done
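
For example, to use the first script (saved here under the arbitrary name pps.sh):

$ chmod +x pps.sh
$ ./pps.sh eth0
TX eth0: 531 pkts/s RX eth0: 489 pkts/s   # illustrative output; your numbers will vary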

Source: Xmodulo 

How to configure SNMPv3 in Ubuntu, CentOS and Cisco

16/03/2016 Comments off

Simple Network Management Protocol (SNMP) is a widely used protocol for gathering information about what is going on within a device. For example, CPU and RAM usage, load on a server, traffic status in a network interface, and many other interesting properties of a device can be queried using SNMP.

Currently, three versions of SNMP are available: v1, v2c and v3. SNMP v1 and v2c can be easily configured, as discussed in a previous article. SNMPv3 adds some additional features, including authentication and encryption schemes (e.g., MD5, SHA, AES and DES). This makes SNMPv3 more secure, and advisable when you run SNMP queries over the Internet.

SNMPv3 configuration is a bit different compared to SNMP v1 or v2c. The following sections explain in detail how the configuration is done.

Configure SNMPv3 on Ubuntu or Debian

The net-snmp-config tool is used for configuration. The following example creates a read-only SNMPv3 user named ‘snmpv3user‘ with password ‘snmpv3pass‘. Default authentication method MD5 and default encryption DES are used. These values can be customized as well.

root@server:~# apt-get install snmp snmpd
root@server:~# service snmpd stop
root@server:~# net-snmp-config --create-snmpv3-user -ro -A snmpv3pass snmpv3user
## OUTPUT ##
adding the following line to /var/lib/snmp/snmpd.conf:
   createUser snmpv3user MD5 "snmpv3pass" DES
adding the following line to /usr/share/snmp/snmpd.conf:
   rouser snmpv3user

root@server:~# service snmpd start
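
To verify the new user from the same host, you can run an snmpwalk query. This is a sketch: MD5/DES and the passphrase match the createUser line above, and the same passphrase is reused for privacy:

root@server:~# snmpwalk -v 3 -u snmpv3user -l authPriv -a MD5 -A snmpv3pass -x DES -X snmpv3pass localhost system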

Read more…


What are good open-source log monitoring tools on Linux?

10/03/2016 Comments off

In an operating system, logs are all about keeping track of events, be they critical system errors, resource usage warnings, transaction history, application status, or user activities. These logs, which are stored as (text or binary) files in the system, are useful for system auditing, debugging and maintenance. However, with so many different system entities generating log files, and at an ever-growing rate, the challenge for a system admin is how to « consume » these log files effectively.

That’s where log monitoring tools come into the picture; they streamline the often laborious process of collecting, parsing and analyzing log files, as well as alerting system admins to any interesting events. These tools are designed from the ground up with a focus on log monitoring, so they offer many attractive features, such as scalable log aggregation and filtering, human-readable display, event correlation, visual or email notification, flexible log retention policies, and so on.

In this post, I am going to introduce a list of popular open-source log monitoring software for Linux, ranging from simple log file viewers to full-blown log monitoring frameworks.

Log Aggregation and Filtering

Log monitoring would not be possible without efficient and scalable mechanisms to collect and pre-process log files. Tools in this category focus on shipping, collecting, filtering, indexing and storing log files, so that they can be further analyzed and visualized in subsequent monitoring pipelines.

1. rsyslog: an open-source log collector server which can filter and consolidate log data (based on the syslog protocol) from different hosts and devices in the network. rsyslog can be configured as a server or a client, where the former plays the role of a log collector and the latter runs as a log sender (see the one-line forwarding sketch after this list).

2. syslog-ng: another open-source implementation of the syslog protocol with more advanced and user-friendly features such as content-based filtering, easier-to-understand config format, and real-time event correlation.

3. systemd journal: systemd journal can be configured for remote journal logging, where locally logged events are forwarded to a remote server over HTTP. In this setup, systemd-journal-upload on a client host serializes and forwards journal messages to systemd-journal-remote running on a remote collector server.

4. logstash: an open-source tool that collects, parses, and stores log files for offline search and analysis. logstash can run in various pipelines thanks to the many plugins supporting different input/output interfaces, decoding/encoding, and filtering rules. Input plugins allow logstash to gather log files from different sources and protocols (e.g., files, S3, RabbitMQ, syslog, collectd, TCP/UDP sockets). Filter/codec plugins are used to parse, convert, modify and add metadata to log files. Output plugins pass processed log files to various target storages (e.g., file, Google Cloud Storage, Nagios, S3, Zabbix).

5. collectd: a daemon service which gathers various system-level statistics and stores them for historical analysis or real-time graphing. Similar to logstash, collectd has an extensible architecture, where you can enable various input/output plugins to change its collection behavior. For log collection, collectd can leverage the LogFile and Network plugins to aggregate remote log files.

6. Logster: an open-source utility for parsing log files for any interesting data, and aggregating extracted data into metrics for subsequent reporting and graphing pipelines.

7. Fluentd: a unified log aggregation layer which allows in-stream processing for a variety of streaming data and log files. It comes with a huge plugin ecosystem with more than 300 plugins to support various input sources and output interfaces.

8. Nxlog: a unified log collector and forwarder which supports a variety of log sources, formats and protocols. Advanced features include multi-threaded log collection and processing, message buffering and prioritization, built-in log rotation, and TLS/SSL transport.

9. Scribe: a scalable log collector server developed by Facebook. Scribe can aggregate log data streamed in real time from a large number of clients. It uses Apache Thrift for protocol encoding, so its interface is compatible with pretty much any language. While a proven solution, Scribe is not something you can deploy quickly as a turnkey solution. Also, note that Scribe is no longer updated or maintained.

10. Flume: a highly scalable and reliable service for transporting and collecting large volumes of streaming log data from any client, and storing it in backend storage such as Apache Hadoop’s HDFS.
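
As a taste of item 1, forwarding all local syslog messages from a client to a central rsyslog collector takes a single line in /etc/rsyslog.conf (the hostname is a placeholder; @@ selects TCP, a single @ would select UDP):

*.* @@logserver.example.com:514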

Read more…